Jan 30 10:11:34 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 10:11:34 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 10:11:34 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 
10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 
crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 10:11:35 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 10:11:35 crc kubenswrapper[4984]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.847174 4984 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852533 4984 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852561 4984 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852570 4984 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852581 4984 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852590 4984 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852599 4984 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852609 4984 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852620 4984 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852629 4984 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852638 4984 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852658 4984 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852667 4984 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852675 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852683 4984 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852692 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852700 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852707 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852715 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852723 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852731 4984 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852739 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852747 4984 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852755 4984 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852763 4984 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852771 4984 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852779 4984 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852787 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852794 4984 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852802 4984 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852810 4984 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852818 4984 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852826 4984 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852833 4984 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852841 4984 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852848 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852856 4984 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852864 4984 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852871 4984 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852879 4984 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852887 4984 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852895 4984 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.852988 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853000 4984 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853011 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853021 4984 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853031 4984 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853041 4984 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853050 4984 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853059 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853067 4984 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853075 4984 feature_gate.go:330] 
unrecognized feature gate: GatewayAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853083 4984 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853091 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853101 4984 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853112 4984 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853120 4984 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853130 4984 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853139 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853148 4984 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853156 4984 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853166 4984 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853176 4984 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853185 4984 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853193 4984 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853201 4984 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853210 4984 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853219 4984 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853229 4984 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853237 4984 feature_gate.go:330] unrecognized feature gate: Example Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853269 4984 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.853279 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853452 4984 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853468 4984 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853485 4984 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853496 4984 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853507 4984 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 
10:11:35.853516 4984 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853529 4984 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853540 4984 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853549 4984 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853558 4984 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853568 4984 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853577 4984 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853587 4984 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853596 4984 flags.go:64] FLAG: --cgroup-root="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853605 4984 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853614 4984 flags.go:64] FLAG: --client-ca-file="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853623 4984 flags.go:64] FLAG: --cloud-config="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853632 4984 flags.go:64] FLAG: --cloud-provider="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853641 4984 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853651 4984 flags.go:64] FLAG: --cluster-domain="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853659 4984 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853669 4984 flags.go:64] FLAG: --config-dir="" Jan 30 10:11:35 crc 
kubenswrapper[4984]: I0130 10:11:35.853678 4984 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853688 4984 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853699 4984 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853708 4984 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853717 4984 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853727 4984 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853736 4984 flags.go:64] FLAG: --contention-profiling="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853744 4984 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853753 4984 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853762 4984 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853773 4984 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853783 4984 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853792 4984 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853801 4984 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853810 4984 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853819 4984 flags.go:64] FLAG: --enable-server="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853828 
4984 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853840 4984 flags.go:64] FLAG: --event-burst="100" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853849 4984 flags.go:64] FLAG: --event-qps="50" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853858 4984 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853868 4984 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853877 4984 flags.go:64] FLAG: --eviction-hard="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853887 4984 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853896 4984 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853905 4984 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853916 4984 flags.go:64] FLAG: --eviction-soft="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853925 4984 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853933 4984 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853942 4984 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853951 4984 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853960 4984 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853969 4984 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853978 4984 flags.go:64] FLAG: --feature-gates="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853988 4984 
flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.853998 4984 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854007 4984 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854017 4984 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854026 4984 flags.go:64] FLAG: --healthz-port="10248" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854035 4984 flags.go:64] FLAG: --help="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854044 4984 flags.go:64] FLAG: --hostname-override="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854052 4984 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854062 4984 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854071 4984 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854079 4984 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854088 4984 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854097 4984 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854107 4984 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854116 4984 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854124 4984 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854133 4984 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 10:11:35 crc kubenswrapper[4984]: 
I0130 10:11:35.854143 4984 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854153 4984 flags.go:64] FLAG: --kube-reserved="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854163 4984 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854171 4984 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854180 4984 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854189 4984 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854198 4984 flags.go:64] FLAG: --lock-file="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854207 4984 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854216 4984 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854225 4984 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854238 4984 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854271 4984 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854280 4984 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854289 4984 flags.go:64] FLAG: --logging-format="text" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854299 4984 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854309 4984 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854317 4984 flags.go:64] FLAG: --manifest-url="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 
10:11:35.854326 4984 flags.go:64] FLAG: --manifest-url-header="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854338 4984 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854347 4984 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854358 4984 flags.go:64] FLAG: --max-pods="110" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854367 4984 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854376 4984 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854386 4984 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854395 4984 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854404 4984 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854413 4984 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854422 4984 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854440 4984 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854450 4984 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854459 4984 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854468 4984 flags.go:64] FLAG: --pod-cidr="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854485 4984 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854499 4984 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854508 4984 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854517 4984 flags.go:64] FLAG: --pods-per-core="0" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854526 4984 flags.go:64] FLAG: --port="10250" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854535 4984 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854544 4984 flags.go:64] FLAG: --provider-id="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854552 4984 flags.go:64] FLAG: --qos-reserved="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854561 4984 flags.go:64] FLAG: --read-only-port="10255" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854570 4984 flags.go:64] FLAG: --register-node="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854579 4984 flags.go:64] FLAG: --register-schedulable="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854588 4984 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854602 4984 flags.go:64] FLAG: --registry-burst="10" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854611 4984 flags.go:64] FLAG: --registry-qps="5" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854620 4984 flags.go:64] FLAG: --reserved-cpus="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854629 4984 flags.go:64] FLAG: --reserved-memory="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854640 4984 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 
10:11:35.854649 4984 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854658 4984 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854667 4984 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854676 4984 flags.go:64] FLAG: --runonce="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854685 4984 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854694 4984 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854703 4984 flags.go:64] FLAG: --seccomp-default="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854712 4984 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854720 4984 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854729 4984 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854738 4984 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854747 4984 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854756 4984 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854765 4984 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854774 4984 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854783 4984 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854792 4984 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 
10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854801 4984 flags.go:64] FLAG: --system-cgroups="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854809 4984 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854824 4984 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854833 4984 flags.go:64] FLAG: --tls-cert-file="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854842 4984 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854852 4984 flags.go:64] FLAG: --tls-min-version="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854861 4984 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854870 4984 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854879 4984 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854888 4984 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854896 4984 flags.go:64] FLAG: --v="2" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854908 4984 flags.go:64] FLAG: --version="false" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854919 4984 flags.go:64] FLAG: --vmodule="" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854930 4984 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.854939 4984 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856053 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856073 4984 feature_gate.go:353] 
Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856084 4984 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856094 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856105 4984 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856113 4984 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856122 4984 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856143 4984 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856151 4984 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856160 4984 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856169 4984 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856177 4984 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856185 4984 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856193 4984 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856201 4984 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856210 4984 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856220 4984 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856229 4984 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856239 4984 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856274 4984 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856285 4984 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856294 4984 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856303 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856323 4984 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856332 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856340 4984 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856348 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856355 4984 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856364 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856372 4984 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856380 4984 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856388 4984 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856395 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856403 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856411 4984 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856419 4984 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856427 4984 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856434 4984 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856442 4984 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856450 4984 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856457 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856466 4984 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856473 4984 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856481 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 10:11:35 crc kubenswrapper[4984]: 
W0130 10:11:35.856489 4984 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856497 4984 feature_gate.go:330] unrecognized feature gate: Example Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856507 4984 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856515 4984 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856523 4984 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856530 4984 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856538 4984 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856546 4984 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856553 4984 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856561 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856569 4984 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856576 4984 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856584 4984 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856592 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856600 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 
10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856619 4984 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856627 4984 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856634 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856644 4984 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856651 4984 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856659 4984 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856667 4984 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856674 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856683 4984 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856690 4984 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856700 4984 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.856710 4984 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.856732 4984 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.867865 4984 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.867946 4984 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868070 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868082 4984 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868091 4984 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868101 4984 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868109 4984 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868119 4984 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868127 4984 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868135 4984 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868144 4984 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868152 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868160 4984 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868169 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868179 4984 feature_gate.go:330] unrecognized feature gate: Example Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868188 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868197 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868205 4984 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868214 4984 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868223 4984 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868231 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868240 4984 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868272 4984 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868281 4984 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 
10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868289 4984 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868297 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868305 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868313 4984 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868322 4984 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868329 4984 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868340 4984 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868352 4984 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868361 4984 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868370 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868378 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868386 4984 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868395 4984 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868403 4984 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868411 4984 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868419 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868428 4984 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868436 4984 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868446 4984 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868456 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868465 4984 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868474 4984 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868483 4984 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868491 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868499 4984 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868508 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868516 4984 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868523 4984 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868531 4984 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868541 4984 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868551 4984 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868558 4984 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868566 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868574 4984 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868584 4984 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868594 4984 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868604 4984 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868613 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868621 4984 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868629 4984 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868636 4984 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868644 4984 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868652 4984 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868660 4984 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 
10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868668 4984 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868676 4984 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868685 4984 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868693 4984 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868701 4984 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.868713 4984 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868946 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868961 4984 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868970 4984 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868979 4984 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868988 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.868996 4984 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstall Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869005 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869014 4984 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869023 4984 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869034 4984 feature_gate.go:330] unrecognized feature gate: Example Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869042 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869051 4984 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869061 4984 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869072 4984 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869081 4984 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869091 4984 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869100 4984 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869109 4984 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869118 4984 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869127 4984 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869136 4984 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869147 4984 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869156 4984 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869166 4984 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869174 4984 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869183 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869191 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869200 4984 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869208 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869216 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869225 4984 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869233 4984 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869242 4984 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869277 4984 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869286 4984 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869295 4984 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: 
W0130 10:11:35.869303 4984 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869311 4984 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869320 4984 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869328 4984 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869336 4984 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869344 4984 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869352 4984 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869360 4984 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869367 4984 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869375 4984 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869383 4984 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869391 4984 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869398 4984 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869407 4984 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869415 4984 feature_gate.go:330] unrecognized feature gate: ExternalOIDC 
Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869423 4984 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869430 4984 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869438 4984 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869446 4984 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869454 4984 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869461 4984 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869469 4984 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869477 4984 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869485 4984 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869492 4984 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869500 4984 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869508 4984 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869516 4984 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869524 4984 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869532 4984 
feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869539 4984 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869547 4984 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869554 4984 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869562 4984 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 10:11:35 crc kubenswrapper[4984]: W0130 10:11:35.869571 4984 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.869583 4984 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.869765 4984 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.875476 4984 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.875596 4984 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.877303 4984 server.go:997] "Starting client certificate rotation" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.877339 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.878535 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 15:38:07.347845891 +0000 UTC Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.878637 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.915165 4984 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.917433 4984 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 10:11:35 crc kubenswrapper[4984]: E0130 10:11:35.919375 4984 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.935574 4984 log.go:25] "Validated CRI v1 runtime API" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.971080 4984 log.go:25] "Validated CRI v1 image API" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.973085 4984 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.978986 4984 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-10-07-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 10:11:35 crc kubenswrapper[4984]: I0130 10:11:35.979022 4984 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.001151 4984 manager.go:217] Machine: {Timestamp:2026-01-30 10:11:35.99853945 +0000 UTC m=+0.564843294 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:da0bcd04-2174-455a-abae-7839c96298f6 BootID:27e6287f-3fa9-4a7b-9d27-962ff895c3d3 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d4:8d:c6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d4:8d:c6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a9:51:66 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:3e:80 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:56:b5:7f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:32:18:12 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:80:cb:ee:1e:46 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:30:cd:13:23:1b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.001451 4984 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.001615 4984 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.002964 4984 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.003184 4984 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.003220 4984 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.003494 4984 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.003514 4984 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.004125 4984 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.004179 4984 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.004465 4984 state_mem.go:36] "Initialized new in-memory state store" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.004579 4984 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.009733 4984 kubelet.go:418] "Attempting to sync node with API server" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.009765 4984 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.009800 4984 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.009816 4984 kubelet.go:324] "Adding apiserver pod source" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.009835 4984 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.013589 4984 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.014988 4984 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.015705 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.015783 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.015752 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.015866 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.016332 4984 kubelet.go:854] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017929 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017957 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017967 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017975 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017988 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.017997 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018005 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018019 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018030 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018039 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018051 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.018060 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.019837 4984 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.020293 4984 server.go:1280] "Started kubelet" Jan 30 
10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.020841 4984 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.021587 4984 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.020949 4984 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 10:11:36 crc systemd[1]: Started Kubernetes Kubelet. Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.022608 4984 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.023018 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.023046 4984 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.023311 4984 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.023334 4984 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.023474 4984 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.024065 4984 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.024506 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:11:53.26447854 +0000 UTC Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 
10:11:36.024783 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.024853 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.025200 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.025569 4984 factory.go:55] Registering systemd factory Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.025639 4984 factory.go:221] Registration of the systemd container factory successfully Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.025964 4984 factory.go:153] Registering CRI-O factory Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.025984 4984 factory.go:221] Registration of the crio container factory successfully Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.026352 4984 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.026447 4984 factory.go:103] Registering Raw factory Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.026468 4984 
manager.go:1196] Started watching for new ooms in manager Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.027537 4984 server.go:460] "Adding debug handlers to kubelet server" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.026854 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f7a86592da6da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:11:36.020264666 +0000 UTC m=+0.586568500,LastTimestamp:2026-01-30 10:11:36.020264666 +0000 UTC m=+0.586568500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.029886 4984 manager.go:319] Starting recovery of all containers Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037814 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037877 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037900 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037917 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037935 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037971 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037983 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.037995 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038011 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038023 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038054 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038067 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038079 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038100 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038111 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038123 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038135 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038146 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038158 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038170 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038184 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038196 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038209 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038269 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038283 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038295 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.038311 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041085 4984 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041149 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041171 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041188 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041203 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041217 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041230 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041268 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041296 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041311 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041327 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041343 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041360 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041373 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041388 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041430 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041448 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041463 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041483 
4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041496 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041510 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041523 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041537 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041557 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041572 4984 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041585 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041606 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041656 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041674 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041687 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041701 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041714 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041727 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041738 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041751 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041782 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041798 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041811 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041824 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041837 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041859 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041872 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041889 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.041983 4984 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042028 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042043 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042057 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042071 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042084 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042096 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042115 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042128 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042141 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042156 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042177 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042192 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042206 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042219 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042233 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042265 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042308 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042324 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042366 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042380 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042395 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042415 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042430 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042447 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042487 4984 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042503 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042517 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042532 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042545 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042562 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042579 4984 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042592 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042610 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042624 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042736 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042758 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042825 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042846 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042863 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042880 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042896 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042911 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042927 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042943 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042957 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042971 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042985 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.042999 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043042 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043055 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043067 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043082 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043099 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043111 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043123 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: 
I0130 10:11:36.043134 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043172 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043186 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043200 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043213 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043227 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043241 4984 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043277 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043292 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043309 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043323 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043337 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043351 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043363 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043376 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043390 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043402 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043415 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043428 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043440 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043455 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043468 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043481 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043495 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043508 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043523 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043536 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043550 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043563 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043577 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043589 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043603 4984 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043616 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043628 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043643 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043655 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043671 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043684 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043697 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043722 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043736 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043753 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043765 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043778 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043791 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043804 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043816 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043830 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043843 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043856 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043871 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043884 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043897 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043909 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043921 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043933 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043947 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043963 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043975 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.043987 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044001 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044014 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044028 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044040 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044052 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044066 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044080 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044093 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044146 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044162 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044176 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044191 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044204 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044222 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044235 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044581 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044603 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044618 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044631 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044646 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044659 
4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044674 4984 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044686 4984 reconstruct.go:97] "Volume reconstruction finished" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.044695 4984 reconciler.go:26] "Reconciler: start to sync state" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.049311 4984 manager.go:324] Recovery completed Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.061804 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.063972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.064016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.064029 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.068777 4984 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.068799 4984 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.068819 4984 state_mem.go:36] "Initialized new in-memory state store" Jan 30 10:11:36 crc 
kubenswrapper[4984]: I0130 10:11:36.082332 4984 policy_none.go:49] "None policy: Start" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.083536 4984 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.083570 4984 state_mem.go:35] "Initializing new in-memory state store" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.084806 4984 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.088860 4984 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.088905 4984 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.088934 4984 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.088984 4984 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.090435 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.090517 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.124459 4984 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.152312 4984 manager.go:334] "Starting Device Plugin manager" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.152381 4984 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.152399 4984 server.go:79] "Starting device plugin registration server" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.152849 4984 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.152915 4984 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.153122 4984 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.153311 4984 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.153331 4984 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.161000 4984 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.189833 4984 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.189968 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191230 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191314 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191590 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191837 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.191893 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.192807 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.192866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.192889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.192978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.193045 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.193061 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.193178 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.193458 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.193537 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.194610 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.194651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.194664 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.194828 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195054 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195105 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.195844 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.196058 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.196113 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.196535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.196570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.196582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197051 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197144 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197292 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.197320 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.198019 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.198056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.198068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.225980 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246292 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246338 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246365 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246382 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246403 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246422 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246481 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246514 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246537 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246613 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246674 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.246864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.247292 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.247373 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.247588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.253010 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.254326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.254373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.254382 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.254410 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.254945 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection 
refused" node="crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.348947 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349020 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349094 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349179 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349200 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349240 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349227 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349300 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349387 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349396 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349358 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349371 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349383 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349283 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349338 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349508 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349491 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349576 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349612 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 
10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349639 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349643 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349674 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349690 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349727 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349731 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.349854 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.455959 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.458160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.458212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.458224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.458268 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.458967 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 30 
10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.515565 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.523162 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.544488 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.562854 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.564903 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-85380a1061a36820e997115b83abbb393faf20133dad3774459d71440d46bad3 WatchSource:0}: Error finding container 85380a1061a36820e997115b83abbb393faf20133dad3774459d71440d46bad3: Status 404 returned error can't find the container with id 85380a1061a36820e997115b83abbb393faf20133dad3774459d71440d46bad3 Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.566757 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ef39f6edd2cbba971d8b339b7139ca703cb39a79ba7ed9a988c3205005f27056 WatchSource:0}: Error finding container ef39f6edd2cbba971d8b339b7139ca703cb39a79ba7ed9a988c3205005f27056: Status 404 returned error can't find the container with id ef39f6edd2cbba971d8b339b7139ca703cb39a79ba7ed9a988c3205005f27056 Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.570962 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.571401 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f690fc37f7fc0ea79ea0de3daee4f473cd86a64bf69b91b5d7151757782ba7d5 WatchSource:0}: Error finding container f690fc37f7fc0ea79ea0de3daee4f473cd86a64bf69b91b5d7151757782ba7d5: Status 404 returned error can't find the container with id f690fc37f7fc0ea79ea0de3daee4f473cd86a64bf69b91b5d7151757782ba7d5 Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.581219 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f4d19e43749858dc69e211938cf699489409fab4972963196f21138c1873ff0d WatchSource:0}: Error finding container f4d19e43749858dc69e211938cf699489409fab4972963196f21138c1873ff0d: Status 404 returned error can't find the container with id f4d19e43749858dc69e211938cf699489409fab4972963196f21138c1873ff0d Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.598760 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1f7f4a5061a2cdbaa58f9cccae9eafdceb4fe2bea01f0c85a18d6f7857d9039a WatchSource:0}: Error finding container 1f7f4a5061a2cdbaa58f9cccae9eafdceb4fe2bea01f0c85a18d6f7857d9039a: Status 404 returned error can't find the container with id 1f7f4a5061a2cdbaa58f9cccae9eafdceb4fe2bea01f0c85a18d6f7857d9039a Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.627323 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection 
refused" interval="800ms" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.820837 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.820982 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.859747 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.860735 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.860776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.860785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:36 crc kubenswrapper[4984]: I0130 10:11:36.860806 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.861149 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 30 10:11:36 crc kubenswrapper[4984]: W0130 10:11:36.937962 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:36 crc kubenswrapper[4984]: E0130 10:11:36.938097 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.023351 4984 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.025466 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:04:12.333378562 +0000 UTC Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.095082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ef39f6edd2cbba971d8b339b7139ca703cb39a79ba7ed9a988c3205005f27056"} Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.096112 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"85380a1061a36820e997115b83abbb393faf20133dad3774459d71440d46bad3"} Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.098419 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f7f4a5061a2cdbaa58f9cccae9eafdceb4fe2bea01f0c85a18d6f7857d9039a"} Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.099231 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4d19e43749858dc69e211938cf699489409fab4972963196f21138c1873ff0d"} Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.099737 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f690fc37f7fc0ea79ea0de3daee4f473cd86a64bf69b91b5d7151757782ba7d5"} Jan 30 10:11:37 crc kubenswrapper[4984]: W0130 10:11:37.196944 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:37 crc kubenswrapper[4984]: E0130 10:11:37.197378 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:37 crc kubenswrapper[4984]: E0130 10:11:37.428593 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Jan 30 10:11:37 crc kubenswrapper[4984]: W0130 10:11:37.595556 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:37 crc kubenswrapper[4984]: E0130 10:11:37.595723 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.661501 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.663017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.663081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.663098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.663150 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:37 crc kubenswrapper[4984]: E0130 10:11:37.663710 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 30 10:11:37 crc kubenswrapper[4984]: I0130 10:11:37.954357 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 10:11:37 crc kubenswrapper[4984]: E0130 10:11:37.955291 4984 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.023373 4984 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.026596 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:16:07.445565063 +0000 UTC Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.103925 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.103950 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.103971 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.103988 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1"} Jan 
30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.104000 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.104640 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.104672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.104680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.105615 4984 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0" exitCode=0 Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.105691 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.105697 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.106296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.106316 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.106324 4984 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108018 4984 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493" exitCode=0 Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108056 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108132 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.108993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.109325 4984 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44" exitCode=0 Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.109373 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.109384 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.110072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.110087 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.110095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.111195 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" exitCode=0 Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.111233 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb"} Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.111402 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.112858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.112905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.112922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.119599 4984 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.123446 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.123489 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.123502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.249803 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:38 crc kubenswrapper[4984]: I0130 10:11:38.532473 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:38 crc kubenswrapper[4984]: W0130 10:11:38.845825 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:38 crc kubenswrapper[4984]: E0130 10:11:38.845950 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.022914 4984 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": 
dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.027239 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:58:31.325075523 +0000 UTC Jan 30 10:11:39 crc kubenswrapper[4984]: E0130 10:11:39.029430 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.117102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.117158 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.117174 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.117181 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.118109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 
10:11:39.118145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.118157 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120786 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120810 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120822 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120834 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120847 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.120884 4984 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.121616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.121637 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.121649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.122449 4984 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53" exitCode=0 Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.122515 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.122560 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.123064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.123088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.123099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.123919 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 
10:11:39.123994 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.124314 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c475de8d49f5aefa32c82d036020b47bc55061e42d5da99bb1052ef7f0ca0b0b"} Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.124538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.124555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.124565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.125498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.125535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.125550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: W0130 10:11:39.217724 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:39 crc kubenswrapper[4984]: E0130 10:11:39.217826 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.264031 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.265114 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.265155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.265168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.265195 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:39 crc kubenswrapper[4984]: E0130 10:11:39.265720 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.293942 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:11:39 crc kubenswrapper[4984]: W0130 10:11:39.434164 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:39 crc kubenswrapper[4984]: E0130 10:11:39.434239 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed 
to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:39 crc kubenswrapper[4984]: W0130 10:11:39.522718 4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:11:39 crc kubenswrapper[4984]: E0130 10:11:39.522819 4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 30 10:11:39 crc kubenswrapper[4984]: I0130 10:11:39.680995 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.077099 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:07:42.794916245 +0000 UTC Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.127750 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.129381 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078" exitCode=255 Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.129454 4984 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078"} Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.129501 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.130264 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.130300 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.130311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.130916 4984 scope.go:117] "RemoveContainer" containerID="ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133012 4984 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31" exitCode=0 Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133059 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31"} Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133128 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133166 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133222 
4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133134 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.133427 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134242 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134337 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134332 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:40 crc kubenswrapper[4984]: 
I0130 10:11:40.134555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:40 crc kubenswrapper[4984]: I0130 10:11:40.134604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.078229 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:28:55.757356393 +0000 UTC Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.137168 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.139759 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5"} Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.139948 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.141061 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.141105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.141123 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:41 crc 
kubenswrapper[4984]: I0130 10:11:41.145147 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4"} Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.145190 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15"} Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.145212 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3"} Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.145230 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992"} Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.145331 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.146887 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.146944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.146963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.376914 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.377038 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.377071 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.378069 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.378133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:41 crc kubenswrapper[4984]: I0130 10:11:41.378156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.014716 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.043019 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.060161 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.079315 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:28:54.203662056 +0000 UTC Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.151968 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.152016 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.151959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807"} Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.152157 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153232 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.153457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.184568 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.184739 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.184790 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.186123 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.186176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.186198 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.193524 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.466510 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.468309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.468355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.468369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.468400 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:11:42 crc kubenswrapper[4984]: I0130 10:11:42.805756 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.080421 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:39:52.508467439 +0000 UTC Jan 30 10:11:43 crc 
kubenswrapper[4984]: I0130 10:11:43.154200 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.154334 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.154369 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.155486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.155546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.155569 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156241 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156357 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:43 crc kubenswrapper[4984]: I0130 10:11:43.156377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 10:11:44 crc kubenswrapper[4984]: I0130 10:11:44.080649 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:09:07.743252712 +0000 UTC Jan 30 10:11:44 crc kubenswrapper[4984]: I0130 10:11:44.159077 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:44 crc kubenswrapper[4984]: I0130 10:11:44.160087 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:44 crc kubenswrapper[4984]: I0130 10:11:44.160144 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:44 crc kubenswrapper[4984]: I0130 10:11:44.160162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:45 crc kubenswrapper[4984]: I0130 10:11:45.081592 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:33:54.292900811 +0000 UTC Jan 30 10:11:45 crc kubenswrapper[4984]: I0130 10:11:45.184952 4984 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 10:11:45 crc kubenswrapper[4984]: I0130 10:11:45.185075 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.082061 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:43:19.220440045 +0000 UTC Jan 30 10:11:46 crc kubenswrapper[4984]: E0130 10:11:46.161215 4984 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.841775 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.842113 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.843752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.843791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:46 crc kubenswrapper[4984]: I0130 10:11:46.843804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:47 crc kubenswrapper[4984]: I0130 10:11:47.083135 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:56:52.207603779 +0000 UTC Jan 30 10:11:48 crc kubenswrapper[4984]: I0130 10:11:48.084289 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:05:27.342459663 +0000 UTC Jan 30 10:11:49 crc kubenswrapper[4984]: I0130 10:11:49.085790 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-15 21:32:24.097311781 +0000 UTC Jan 30 10:11:50 crc kubenswrapper[4984]: I0130 10:11:50.023305 4984 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 10:11:50 crc kubenswrapper[4984]: I0130 10:11:50.086687 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:35:02.454167661 +0000 UTC Jan 30 10:11:51 crc kubenswrapper[4984]: I0130 10:11:51.087048 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:53:21.201113306 +0000 UTC Jan 30 10:11:51 crc kubenswrapper[4984]: I0130 10:11:51.115584 4984 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 10:11:51 crc kubenswrapper[4984]: I0130 10:11:51.115643 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 10:11:51 crc kubenswrapper[4984]: I0130 10:11:51.125579 4984 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 10:11:51 crc kubenswrapper[4984]: I0130 10:11:51.125637 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.021657 4984 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]log ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]etcd ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-apiextensions-informers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/crd-informer-synced ok 
Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 10:11:52 crc kubenswrapper[4984]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/bootstrap-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-registration-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]autoregister-completion ok Jan 30 10:11:52 crc kubenswrapper[4984]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 10:11:52 crc 
kubenswrapper[4984]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 10:11:52 crc kubenswrapper[4984]: livez check failed Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.021743 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.088077 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:20:47.158003624 +0000 UTC Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.814062 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.814319 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.815725 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.815788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:11:52 crc kubenswrapper[4984]: I0130 10:11:52.815811 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:11:53 crc kubenswrapper[4984]: I0130 10:11:53.089212 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:33:22.975197543 +0000 UTC Jan 30 10:11:54 crc kubenswrapper[4984]: I0130 10:11:54.089851 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-27 07:16:35.02553296 +0000 UTC Jan 30 10:11:55 crc kubenswrapper[4984]: I0130 10:11:55.090651 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:22:22.425357047 +0000 UTC Jan 30 10:11:55 crc kubenswrapper[4984]: I0130 10:11:55.185285 4984 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 10:11:55 crc kubenswrapper[4984]: I0130 10:11:55.185342 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.091311 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:16:44.284389607 +0000 UTC Jan 30 10:11:56 crc kubenswrapper[4984]: E0130 10:11:56.116054 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.120343 4984 trace.go:236] Trace[77628630]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:42.178) (total time: 13941ms): Jan 30 10:11:56 crc kubenswrapper[4984]: 
Trace[77628630]: ---"Objects listed" error: 13941ms (10:11:56.120) Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[77628630]: [13.941864192s] [13.941864192s] END Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.120377 4984 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 10:11:56 crc kubenswrapper[4984]: E0130 10:11:56.120880 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121015 4984 trace.go:236] Trace[623288102]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:44.539) (total time: 11581ms): Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[623288102]: ---"Objects listed" error: 11581ms (10:11:56.120) Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[623288102]: [11.581915432s] [11.581915432s] END Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121050 4984 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121448 4984 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121467 4984 trace.go:236] Trace[1129813090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:43.183) (total time: 12937ms): Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[1129813090]: ---"Objects listed" error: 12937ms (10:11:56.121) Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[1129813090]: [12.937939174s] [12.937939174s] END Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121492 4984 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.122708 4984 trace.go:236] 
Trace[254242093]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:44.222) (total time: 11900ms): Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[254242093]: ---"Objects listed" error: 11900ms (10:11:56.122) Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[254242093]: [11.900513676s] [11.900513676s] END Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.122735 4984 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.132112 4984 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.158660 4984 csr.go:261] certificate signing request csr-s96ss is approved, waiting to be issued Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.173491 4984 csr.go:257] certificate signing request csr-s96ss is issued Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.883001 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.901986 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.020428 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.020988 4984 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.021061 4984 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.023302 4984 apiserver.go:52] "Watching apiserver" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.026400 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.029616 4984 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030123 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-xrm2v","openshift-dns/node-resolver-6tdgl","openshift-machine-config-operator/machine-config-daemon-m4gnh","openshift-multus/multus-bnkpj","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-etcd/etcd-crc","openshift-multus/multus-additional-cni-plugins-5vcbf","openshift-multus/network-metrics-daemon-sdmkd","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030482 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030656 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030892 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030966 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031020 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031036 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031074 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031417 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031453 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031535 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031694 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031776 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031829 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031848 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031868 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.032993 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.033361 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.033596 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.034670 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.034956 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035640 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035725 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035657 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.036785 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.036975 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037044 4984 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037175 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037288 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037617 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.038809 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.039274 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.039321 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.040169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.040337 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.041318 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.046387 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047704 4984 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047829 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048081 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048100 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048226 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048404 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048422 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048710 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.049299 
4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.062221 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.073722 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.082539 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.091434 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:14:44.290899806 +0000 UTC Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.092422 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.101503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.117380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.124474 4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126945 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126992 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127008 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127026 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127079 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127094 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127110 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127148 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127164 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127230 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127266 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127280 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127295 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127339 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127357 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127372 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127403 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127419 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127438 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127452 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127469 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127458 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127484 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127652 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127692 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127717 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127758 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127755 4984 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127803 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127871 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127904 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127929 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127955 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127983 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.127992 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128015 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128039 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128065 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128090 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128090 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128116 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128143 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128189 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" 
(OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128184 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128196 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128285 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128318 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128375 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128475 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128561 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128597 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128716 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128750 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128770 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: 
"kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128784 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128816 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128849 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128884 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 
10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128948 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128985 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129050 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129084 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129118 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129153 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129226 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129080 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.129272 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.629224435 +0000 UTC m=+22.195528299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132362 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132383 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132401 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132437 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132453 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132469 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132484 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132507 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132552 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132569 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: 
I0130 10:11:57.132585 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132603 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132619 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132635 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132656 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132672 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132695 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132718 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132740 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132761 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132785 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132807 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132877 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132899 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132921 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132945 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132981 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133054 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133077 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.133101 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133123 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133145 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133161 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133178 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133231 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: 
"v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133398 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133419 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133663 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133745 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133984 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129293 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129298 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129491 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129530 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134137 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129648 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129655 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129757 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134176 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129849 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129988 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130305 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130474 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130508 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130583 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130616 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130654 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130820 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130871 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130916 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131044 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131055 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131066 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131388 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131391 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134397 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133197 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134506 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134553 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134629 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134669 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134704 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134739 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134770 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134802 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134835 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134868 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134899 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136377 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.136435 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136472 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137405 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137457 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137508 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137560 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134591 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134666 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134813 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134847 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134977 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135017 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135241 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135608 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135962 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136328 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136697 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137056 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137466 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136922 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138053 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138279 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138922 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138935 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139061 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139447 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139463 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139742 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139740 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139891 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140133 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140435 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140514 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137613 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140991 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141091 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141202 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141698 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141794 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142139 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142214 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142302 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142382 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142454 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142523 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142597 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142678 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142865 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142863 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142943 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143079 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.143100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143142 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143158 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143174 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143190 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143214 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143230 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143260 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143276 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143292 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143329 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143356 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143374 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143390 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143438 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143454 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143470 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143488 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143527 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143544 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143559 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143574 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143592 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.143624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143691 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143723 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143739 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143756 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145551 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145583 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145601 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145621 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145637 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145670 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145727 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145752 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145773 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145798 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145816 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: 
\"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145881 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145897 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145914 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145930 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: 
\"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146004 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146026 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146061 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146095 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146124 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146138 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140973 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141051 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141070 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141108 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141811 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142010 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142639 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149156 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149206 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149222 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: 
\"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149247 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149366 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149383 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149401 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod 
\"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149436 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149476 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149495 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149493 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149535 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149574 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149590 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.149394 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150334 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150978 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.151552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.151731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152353 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152905 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152924 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153101 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153164 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153528 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153522 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156151 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155786 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153994 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.154092 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.154316 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155137 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149607 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156693 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155776 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156300 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156496 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157387 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157654 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157933 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159032 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159164 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod 
"fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159277 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159596 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159801 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160181 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160202 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160185 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160279 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160304 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160336 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160360 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160379 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160399 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160420 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160440 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160460 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.160539 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160559 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160575 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160594 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160616 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" 
(UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160850 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.161033 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.161091 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161376 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161699 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 10:11:57.661676941 +0000 UTC m=+22.227980965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161922 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.162015 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.662004288 +0000 UTC m=+22.228308112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.162554 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.162918 4984 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163648 4984 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163733 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163776 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163792 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath 
\"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163807 4984 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163821 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163834 4984 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163846 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163858 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163870 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163883 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163893 4984 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163905 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163918 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163930 4984 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163941 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163953 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163965 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163976 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163989 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164000 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164012 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164023 4984 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164036 4984 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164048 4984 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164060 4984 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164073 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164083 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164095 4984 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164106 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164117 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164128 4984 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164139 4984 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on 
node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164149 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164159 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164168 4984 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164177 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164186 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164195 4984 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164450 4984 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: 
I0130 10:11:57.164471 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164870 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.165863 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.165931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.172842 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.174409 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 10:06:56 +0000 UTC, rotation deadline is 2026-10-29 04:10:36.873082417 +0000 UTC Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.174760 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175129 4984 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175151 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175169 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175186 4984 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175202 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.168715 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.172368 4984 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175219 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175323 4984 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175339 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175354 4984 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175368 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175171 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.175382 4984 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175397 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175410 4984 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175424 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.173087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175443 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.675425609 +0000 UTC m=+22.241729433 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175099 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6521h58m39.697999466s for next certificate rotation Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175468 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175487 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175497 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175507 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175517 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175528 4984 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175538 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175549 4984 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175558 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175566 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175576 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175586 4984 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175595 4984 reconciler_common.go:293] "Volume detached for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175604 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175613 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175623 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175631 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175641 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175650 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175672 4984 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175684 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175696 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175707 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175716 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175724 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175735 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175745 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175756 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175769 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175781 4984 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175794 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175808 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175822 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175831 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175840 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175850 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175859 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175870 4984 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175882 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175894 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 
10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175906 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175919 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175931 4984 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175943 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175955 4984 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175966 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175978 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175993 4984 reconciler_common.go:293] 
"Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176005 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176019 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176031 4984 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176044 4984 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176056 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176069 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176081 4984 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176092 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176104 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176116 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176128 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176140 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176152 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176163 4984 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176174 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176186 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176198 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176209 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175837 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176222 4984 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176238 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc 
kubenswrapper[4984]: E0130 10:11:57.176265 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.176285 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.176345 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.676328639 +0000 UTC m=+22.242632673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176269 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.177120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: 
"e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.183468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.183899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184214 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184991 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185110 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185706 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186117 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186672 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.187021 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188686 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188686 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188719 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188829 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188849 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188980 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193216 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193415 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193866 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.194188 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.195206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196810 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197032 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197050 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197216 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197270 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197753 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.199935 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200321 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200630 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200968 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201162 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201447 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201625 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201681 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202232 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202305 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202434 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202535 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202989 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203132 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203198 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203796 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204132 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204736 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204748 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.205435 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.205574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.206112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207648 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" exitCode=255 Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5"} Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207902 4984 scope.go:117] "RemoveContainer" containerID="ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.208037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.208736 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.208954 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.210995 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.215043 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.226632 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.226865 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.228202 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.233859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.244127 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.256704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.269061 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277234 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277283 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277305 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277326 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277346 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277365 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277387 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277396 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277417 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277491 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod 
\"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277516 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277540 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277580 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277600 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: 
\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277625 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277643 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277676 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277734 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: 
\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277775 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " 
pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277867 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277877 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277889 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277347 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277919 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277323 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278016 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278031 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277934 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278108 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278107 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278128 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278147 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278149 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278176 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278197 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278248 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278283 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.278278 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278345 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.278464 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.778408572 +0000 UTC m=+22.344712416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278603 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278682 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278705 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278775 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278819 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.278837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278861 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278875 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278886 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278915 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278961 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279054 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279093 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279124 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279153 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279164 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" 
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279055 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279464 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279573 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279838 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.279899 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280012 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280032 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280055 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280075 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280155 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280170 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.280183 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280197 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280209 4984 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280222 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280235 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280275 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280290 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280302 4984 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280315 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280328 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280340 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280353 4984 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280364 4984 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280377 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280391 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280403 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280417 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280431 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280446 4984 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280476 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280493 4984 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280510 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280527 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280544 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280556 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280616 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280617 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280662 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " 
pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280841 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281485 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"ovnkube-node-xrm2v\" 
(UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281677 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281719 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281867 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282061 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282082 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282097 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282135 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282864 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282881 4984 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282893 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282907 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282919 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282931 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.282975 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282988 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283000 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283033 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283048 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283061 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283102 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: 
I0130 10:11:57.283115 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283127 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283139 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283151 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283165 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283184 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283197 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283217 4984 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283228 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283240 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283267 4984 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283279 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283291 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283304 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283316 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 
10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283328 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283339 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283351 4984 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283363 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283376 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283388 4984 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283401 4984 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283414 4984 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283426 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283438 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.309902 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.310127 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.311281 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.314030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.314885 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.315357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.323030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.328563 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.346071 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.351581 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.361482 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.367470 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.374516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.374915 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.384622 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.390187 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.393191 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5
358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.395883 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.403342 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.403924 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.415041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.426218 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.437390 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.446618 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.466217 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: W0130 10:11:57.475692 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3 WatchSource:0}: Error finding container acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3: Status 404 
returned error can't find the container with id acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3 Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.476139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.503205 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692377 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692520 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.692499174 +0000 UTC m=+23.258802998 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692566 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692605 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692712 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692835 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692849 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692859 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692888 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.692880812 +0000 UTC m=+23.259184636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693155 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693188 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693181599 +0000 UTC m=+23.259485423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693230 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693268 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693244591 +0000 UTC m=+23.259548415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693309 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693319 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693328 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693348 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693342493 +0000 UTC m=+23.259646317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.793105 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.793276 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.793315 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.793301649 +0000 UTC m=+23.359605473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.091845 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:55:14.039278981 +0000 UTC Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.093695 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.094698 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.095927 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.097028 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.098073 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.098622 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.099219 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.100204 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.100864 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.101860 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.104218 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.105629 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.106227 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.106887 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.107980 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.108585 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.110409 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.110919 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.111545 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.112709 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.113272 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.114501 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.115102 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.116594 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.117418 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.118394 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.119610 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.120328 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.121197 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.121904 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.124767 4984 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.124898 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.126761 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.127929 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.128482 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.130288 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.131186 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.132284 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.132985 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.134385 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.134939 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.135968 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.136993 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.137657 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.138150 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.139126 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.140043 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.140780 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.141249 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.142130 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.142759 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.143674 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.144216 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.144687 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.215746 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.216170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8af47c59e6471c72a764fc1bb679b40a49a6f1be1b7952c88cac70cfff472b8"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218399 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218411 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"35176f1db40bff30d0a1bdf1cd2412184637b463c4c1804bec5920259c1dd128"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.226228 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" 
event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.226313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"591abe0e3314df0de6d6cfd6cbf735143e6b6adce321fcc920f5a2e00918538a"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.228688 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234282 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234365 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"36892c45faeadfd18f700d0f05aa81c0a865b412a10a9b7263b4cedcfd049de7"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234852 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.236451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tdgl" event={"ID":"5a9a337d-bc6b-4a98-8abc-7569fa4fa312","Type":"ContainerStarted","Data":"d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.236488 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tdgl" event={"ID":"5a9a337d-bc6b-4a98-8abc-7569fa4fa312","Type":"ContainerStarted","Data":"4603b4dde61408a7dfeb79ea5137d1d31be5f5f0d3e704bb3e4679acf2a7cf15"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238490 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" exitCode=0 Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"9a5c5f0c87eb230fd06c2a946e269e2d2a3860384327e26e9cd419f72e754050"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.242043 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.244424 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.244708 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245918 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245932 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"931619a47687c757bb2f44f8c147193f2613873a6f45d51204f84236421e0391"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.249295 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.255764 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.263692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.273316 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.282561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.292828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.309946 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.333218 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.377663 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.400504 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.413044 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.423427 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.432465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.443880 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.455443 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.471612 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.481498 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.489530 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.497883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.507699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.523037 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.541608 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.553405 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.561122 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.568335 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.579551 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.591506 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702443 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702607 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702581205 +0000 UTC m=+25.268885099 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702641 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702654 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702695 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702678878 +0000 UTC m=+25.268982802 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702837 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702921 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702901583 +0000 UTC m=+25.269205467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702921 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702940 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702964 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702968 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702979 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702988 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.703024 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.703013655 +0000 UTC m=+25.269317569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.703052 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.703032615 +0000 UTC m=+25.269336499 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.804114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.804341 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.804459 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.804432333 +0000 UTC m=+25.370736158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.022511 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091657 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091873 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.091968 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091656 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091977 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:32:44.56824892 +0000 UTC Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091689 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092126 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092181 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092337 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.251412 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c" exitCode=0 Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.251509 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255385 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 
10:11:59.255449 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255465 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255496 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255864 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.256023 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.277106 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.302086 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.340870 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.358043 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.393843 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.411846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.435206 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.454454 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.464180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.476867 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc 
kubenswrapper[4984]: I0130 10:11:59.494298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.505950 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.523355 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.544287 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.790899 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l5dvh"] Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.791659 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.795134 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.795414 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.796158 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.796613 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.808917 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.825145 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.840325 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.859918 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.885365 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.909154 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914735 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.927182 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.936605 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.946509 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.955041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.972708 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.985137 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.996037 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.011239 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc 
kubenswrapper[4984]: I0130 10:12:00.015962 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016068 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016064 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.017211 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.028035 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.035952 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.092477 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:31:54.589871159 +0000 UTC Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.106465 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.259991 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980"} Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.262798 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l5dvh" event={"ID":"a73d7427-d84d-469a-8a34-e32bcd26e1e7","Type":"ContainerStarted","Data":"0451172309e5191c483641fb9d175a3fcfa50a22b1c3991d762503b2c3e42d0a"} Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.278175 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.294747 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.305845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.315888 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.329108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.341289 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.355156 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.368500 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.378990 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.390424 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc 
kubenswrapper[4984]: I0130 10:12:00.404335 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.415582 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.427151 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.440583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.451014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.721949 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722060 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722098 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722121 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722318 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722366 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722380 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722320 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722280353 +0000 UTC m=+29.288584207 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722463 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722515 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722341 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722536 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722475 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722454007 +0000 UTC m=+29.288757841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722408 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722643 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.72261634 +0000 UTC m=+29.288920204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722673 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722658351 +0000 UTC m=+29.288962215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722704 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722692462 +0000 UTC m=+29.288996316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.823439 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.823549 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.823616 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.823599129 +0000 UTC m=+29.389902953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089561 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089569 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089687 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089584 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089565 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089750 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089832 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089926 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.092849 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:15:23.754085304 +0000 UTC Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.274509 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.279407 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980" exitCode=0 Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.280222 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.282169 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l5dvh" 
event={"ID":"a73d7427-d84d-469a-8a34-e32bcd26e1e7","Type":"ContainerStarted","Data":"1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.296194 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.323641 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.344757 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.362198 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.380092 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.396480 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc 
kubenswrapper[4984]: I0130 10:12:01.415503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.433025 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.446133 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.459846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.479679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.506380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.529614 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.543431 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.556108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.567873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.581162 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.593084 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.603806 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.624529 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.643943 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.657062 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.668810 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.678023 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc 
kubenswrapper[4984]: I0130 10:12:01.692839 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.702628 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.715704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.729053 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.740432 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.754220 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.092917 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:18:11.003142963 +0000 UTC Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.192675 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.204562 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.216613 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.219584 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.247073 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.264881 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.277077 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.287820 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68" exitCode=0 Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.287910 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.295008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.297404 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc 
kubenswrapper[4984]: E0130 10:12:02.302521 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.321680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.343634 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.367318 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.384019 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.398706 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.425044 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.451831 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.472902 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.489551 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.507495 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.521936 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.527922 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528626 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.537324 4984 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.537550 4984 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 
30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538731 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538767 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.547837 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.557905 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.563169 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.573059 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577332 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.581877 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.596388 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.597078 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600776 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.613659 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.617340 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625627 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.633436 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z 
is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.643234 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.643400 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647290 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.654302 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.679807 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.696815 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.708400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.734150 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 
10:12:02.746054 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.749732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.749908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.762115 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.782344 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.797505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853760 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.958060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.958187 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061772 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089713 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089806 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089727 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.089944 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089729 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090123 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090317 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090466 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.093106 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:32:09.272993888 +0000 UTC Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268786 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.301751 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d" exitCode=0 Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.301838 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.335576 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.356557 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370879 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370914 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.374169 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.392034 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.411283 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.457186 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473285 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.508859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.523525 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.535710 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.546567 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.556168 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.571178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574986 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.582846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.591944 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.602455 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc 
kubenswrapper[4984]: I0130 10:12:03.617441 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677513 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780315 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780387 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780411 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780429 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.884662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.093366 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:39:45.948343904 +0000 UTC Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192658 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192686 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294764 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.307699 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.312056 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.322674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc 
kubenswrapper[4984]: I0130 10:12:04.338808 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.355887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.380574 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398951 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.414958 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.435962 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.454896 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.470855 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.484374 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501357 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501812 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.514657 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.525144 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.541312 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc 
kubenswrapper[4984]: I0130 10:12:04.558363 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.574891 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.595539 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604772 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.620276 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z 
is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.647074 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.685422 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.701093 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706863 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706931 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706942 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.722908 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.740458 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.755133 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.767848 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.767997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768034 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 10:12:12.76801428 +0000 UTC m=+37.334318104 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768063 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768096 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768151 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768171 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768149323 +0000 UTC m=+37.334453187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768278 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768296 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768311 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768341 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-30 10:12:12.768331917 +0000 UTC m=+37.334635741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768393 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768421 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768412749 +0000 UTC m=+37.334716573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768471 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768484 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768495 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768521 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768512861 +0000 UTC m=+37.334816685 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.776601 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.794932 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809262 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809282 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809295 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.810645 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.824142 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.841704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.854719 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.865915 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.868572 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.868803 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.868968 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.868937597 +0000 UTC m=+37.435241441 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.878165 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.888934 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911753 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.014932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.014988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015048 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089974 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089859 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090097 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090177 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089863 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090259 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.094099 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:08:03.374743802 +0000 UTC Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.318418 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167" exitCode=0 Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.318487 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319471 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319508 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319530 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324460 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324541 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.345732 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.350678 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.353730 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.360987 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.374916 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.391928 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.403121 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.417475 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427036 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427102 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427750 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.440588 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.450630 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.460551 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.472547 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.484229 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.493716 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.504028 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.529817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529835 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529878 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.530362 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b
3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.551465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"conta
inerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.561272 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.573776 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.588296 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.600476 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.610018 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.630129 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632429 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.644675 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.660336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.672174 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.681696 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.693473 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.709589 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.725747 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.735026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.736769 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.748577 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.789438 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.836474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 
10:12:05.836696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.836922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.837034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.837122 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.877899 4984 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939330 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939394 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042779 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042842 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.096195 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:00:34.730837783 +0000 UTC Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.116609 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.142936 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.146353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.146415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.158129 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.177304 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.237134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.248894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 
10:12:06.249540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.249562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.250138 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.250415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.251699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.265666 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.279026 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.290835 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.301766 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc 
kubenswrapper[4984]: I0130 10:12:06.317983 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.327030 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1" exitCode=0 Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.327109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.330742 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.345653 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353707 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.365994 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.394665 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.435866 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456150 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.471589 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.511046 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558217 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.561107 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.589984 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.630430 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660157 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 
10:12:06.660206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660238 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.671371 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.727364 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.761706 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763181 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763337 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.792085 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.831278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc 
kubenswrapper[4984]: I0130 10:12:06.866210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866268 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866307 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.876461 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.910573 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.953380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968414 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968442 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.995505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.035828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071018 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071232 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.075448 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.089997 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090131 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090556 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090632 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090700 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090774 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090828 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090896 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.096872 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:41:15.827175223 +0000 UTC Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276355 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.331948 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.336476 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" exitCode=1 Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.336613 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.337223 4984 scope.go:117] "RemoveContainer" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.341567 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.355101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.370883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.380012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.380025 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.387389 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.402471 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.415622 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.465201 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc 
kubenswrapper[4984]: I0130 10:12:07.481627 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482806 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482915 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.499309 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.517394 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 
2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.529958 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.543561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.560356 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc 
kubenswrapper[4984]: I0130 10:12:07.586332 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586363 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586376 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.601368 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10
:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.649330 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.673398 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689941 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 
10:12:07.689988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.690007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.712537 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.753176 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793511 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.796692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.832173 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.872974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b
b47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896899 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.916006 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.964039 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999047 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 
10:12:07.999921 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:07.999959 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.036060 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.075157 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.097765 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:32:46.966474103 +0000 UTC Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102787 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.127887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.175239 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.192086 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 
10:12:08.204919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204956 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.233868 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.279271 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307826 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.322818 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.345657 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.348019 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.348389 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.352475 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.395457 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409980 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.433819 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.481647 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc 
kubenswrapper[4984]: I0130 10:12:08.512367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512412 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.518101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch
\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.561162 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.591689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 
10:12:08.615140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615152 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.635680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.672139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.710415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717833 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.757730 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.792416 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820602 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820626 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.830438 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.872309 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.914845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923156 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.956753 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.998014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029561 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029731 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090083 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090119 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090160 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090268 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090287 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090412 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090504 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090629 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.097892 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:40:08.857782256 +0000 UTC Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132869 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.235965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236041 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339978 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.351666 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.352101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355400 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" exitCode=1 Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355460 4984 scope.go:117] "RemoveContainer" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.356457 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.356630 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.376902 4984 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed
7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.388793 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.406505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b
b47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.420339 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.432149 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447917 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.448002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.448331 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.468738 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.480124 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.491686 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.504384 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.513511 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.524119 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.539286 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] 
Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port 
openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551156 4984 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551388 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.562423 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.593927 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.635784 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653376 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.755973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756129 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756152 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.798369 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72"] Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.798988 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.802101 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.802675 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.817040 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.830345 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.845828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc 
kubenswrapper[4984]: I0130 10:12:09.858421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.867876 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.888678 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.913529 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928173 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928221 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928333 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928374 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnsd\" (UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.951110 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc 
kubenswrapper[4984]: I0130 10:12:09.963648 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963753 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963802 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.995225 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029062 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029135 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029299 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnsd\" 
(UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.030747 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.031074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.031710 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.036913 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.059536 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnsd\" (UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.067005 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.067026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.092576 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.098456 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:24:04.402595199 +0000 UTC Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.122291 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.140485 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169051 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169091 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.172595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.210303 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.253326 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 
10:12:10.275142 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275170 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.293120 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.335013 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.363206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.363275 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" 
event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"8b738624a6a0db2711461f6dc1648c59fe76241c9b4e04fabb15e662c690eceb"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.366807 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.371887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.373567 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:10 crc kubenswrapper[4984]: E0130 10:12:10.373820 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377195 4984 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377237 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.413028 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.450240 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479935 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.480031 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.490231 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.532429 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.576041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583677 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.614014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.650781 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685818 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.691503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.735900 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.772626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.788669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789465 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789659 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.815305 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.869298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.899883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.934103 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.971825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.994720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 
10:12:10.994989 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995174 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995289 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.012549 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.064628 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090056 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090135 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090321 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090462 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090717 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090829 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098106 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098119 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.099326 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:45:50.87602382 +0000 UTC Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200400 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200541 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303319 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.376656 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.391000 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.402817 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409760 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.418936 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.428789 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.441566 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.450486 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b56
21187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.459532 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.470718 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.483776 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.494173 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.506774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511365 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511443 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.535828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.572016 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.609823 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.653123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.698420 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc 
kubenswrapper[4984]: I0130 10:12:11.715790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715802 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715818 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715830 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.735636 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.817991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818072 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.025175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.025198 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.102214 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:16:16.808139288 +0000 UTC Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229815 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229896 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.437061 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.437084 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539982 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.540011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.540023 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648141 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.670099 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674529 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.695016 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702422 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.721334 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725537 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725669 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.745184 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749803 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.771046 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.771185 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.857966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858124 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858203 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.858157411 +0000 UTC m=+53.424461275 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858316 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858348 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858365 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858382 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858425 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.858404927 +0000 UTC m=+53.424708821 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858494 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858647 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858706 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858763 4984 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858781 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858787 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859384 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.859346288 +0000 UTC m=+53.425650172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859440 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.85941314 +0000 UTC m=+53.425717004 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859476 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.859459951 +0000 UTC m=+53.425763815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876182 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876223 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876240 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.959205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.959451 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.959564 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.959539009 +0000 UTC m=+53.525842863 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978871 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978893 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090077 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090132 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090161 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090077 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090279 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090430 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090595 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090765 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.103020 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:19:14.920434323 +0000 UTC Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186428 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186502 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289956 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392799 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495285 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495326 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598399 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.701947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702244 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702332 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.805927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908972 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011954 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.090476 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.104051 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:53:33.530299708 +0000 UTC Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116584 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116626 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218914 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218949 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321117 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.392443 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.395462 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.396523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.420618 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.436319 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.448528 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.461121 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc 
kubenswrapper[4984]: I0130 10:12:14.476400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.489370 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.506952 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.519799 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527356 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.535155 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.548608 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.562466 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.584847 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.599225 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.611947 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.623415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 
10:12:14.635043 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635083 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.637679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.672979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738216 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841116 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841145 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943910 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046756 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089272 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089320 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089325 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089281 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089444 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089570 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089606 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089689 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.104158 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:26:00.481509751 +0000 UTC Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149136 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149245 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252864 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355627 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457670 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560314 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560357 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662639 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765443 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867799 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970786 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970880 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970897 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076496 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076571 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.077485 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.102598 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.104722 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:02:40.714001082 +0000 UTC Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.116850 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.128837 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.140406 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.155116 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.173810 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179508 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.195385 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.211689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.222283 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.237065 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.251069 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.265692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281504 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281718 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.295016 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.305075 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.317019 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc 
kubenswrapper[4984]: I0130 10:12:16.331822 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384329 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384354 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.486911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487458 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590590 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590633 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693920 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.694015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.694029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797242 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900743 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090142 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090239 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090143 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090372 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090458 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090568 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090948 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.091228 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.105010 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:03:31.253546697 +0000 UTC Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108270 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108394 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211254 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211361 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314865 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.417825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418320 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521756 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521835 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624392 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726737 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726751 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726762 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829136 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829165 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932363 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932405 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.035028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.035053 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.105227 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:33:59.812487889 +0000 UTC Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241571 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344641 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344697 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446941 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550401 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550554 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653770 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653901 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757242 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861316 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964774 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067524 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067583 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089263 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089352 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089357 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089378 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.089558 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.089716 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.090028 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.090218 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.106338 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:59:13.555754479 +0000 UTC Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171058 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171661 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275533 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377887 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377901 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.480881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.482356 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586263 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.587070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.587318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794832 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794885 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897956 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001507 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104512 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.107301 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:14:31.932290508 +0000 UTC Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.207911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208147 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311764 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311808 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.517599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518254 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623396 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623459 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623503 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.726515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.726951 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727177 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830115 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830184 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.933803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934710 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038241 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038262 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089823 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089918 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089822 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090120 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090283 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090401 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.107804 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:18:39.301138004 +0000 UTC Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141391 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245352 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245367 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348722 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348734 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451512 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451548 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451561 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555041 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555128 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555142 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.657873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658066 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.762149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.762366 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867469 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970661 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.108330 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:35:15.758652209 +0000 UTC Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175569 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175610 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175628 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278282 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278382 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484076 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689751 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689761 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792585 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896555 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917520 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.932093 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937253 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937329 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.957725 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962324 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.979963 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985312 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.005096 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:23Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.023956 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:23Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.024175 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089135 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089163 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089305 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089494 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089761 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089924 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.090029 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.108608 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:08:02.189983964 +0000 UTC Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.130020 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.130037 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.233945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234808 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234937 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443396 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546545 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546563 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649144 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649175 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.752667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.752954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753075 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753235 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.856721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857790 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960601 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063907 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063931 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063940 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.108903 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:18:40.946832977 +0000 UTC Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371417 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473510 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575481 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575593 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678556 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678602 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678660 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781641 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781685 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884897 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884974 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987446 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987499 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089564 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089618 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089566 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089700 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.089871 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090058 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090227 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.090987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091114 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091489 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.109896 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:01:03.670637822 +0000 UTC Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193509 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296331 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.398946 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399037 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399075 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.441335 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.445320 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.445913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.468083 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed 
attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.500349 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501352 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501476 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.517653 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.542834 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.557792 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.587053 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604078 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc 
kubenswrapper[4984]: I0130 10:12:25.604118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604142 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604153 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.605943 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.624940 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.639843 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.648722 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.657180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc 
kubenswrapper[4984]: I0130 10:12:25.668490 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.677466 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.690937 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.708965 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.721120 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.733761 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.809936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.809995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810046 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911958 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014176 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.108916 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.109995 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:21:08.564387242 +0000 UTC Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117106 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117123 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.129915 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.147249 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.161021 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.181300 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.195960 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.215233 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.218864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218893 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.234983 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.259291 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.275616 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.289817 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.309098 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.319990 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320750 4984 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.332516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.346341 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.359407 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.368826 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.450349 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.451225 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454200 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" exitCode=1 Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454350 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454868 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:26 crc kubenswrapper[4984]: E0130 10:12:26.455014 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.472157 4984 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.491913 4984 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\
\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.509169 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a
5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.521184 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526878 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 
10:12:26.526919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.532392 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.542888 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.553890 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.564061 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.576029 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.589379 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.601008 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.612984 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.626626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.639337 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.651841 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.664336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.676621 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732507 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836361 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836518 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939209 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939348 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042621 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089775 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089871 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089888 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.090007 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090216 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090307 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.110857 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:49:57.567798732 +0000 UTC Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144890 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144907 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144949 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248539 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.351924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.351990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454628 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.459089 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.462797 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.462936 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.479304 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.492227 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc 
kubenswrapper[4984]: I0130 10:12:27.514639 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.524860 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.539228 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557967 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557980 4984 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.558372 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.558009 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.571665 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.590362 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.601487 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.615886 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.628997 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.640644 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.653178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661787 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661806 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661819 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.665207 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.678959 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.696579 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.720010 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764831 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868270 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971238 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074284 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.111789 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:32:07.613550752 +0000 UTC Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.176838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177781 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384465 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384507 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487725 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487768 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590364 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590374 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590398 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700569 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700581 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.803705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.803988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804522 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908217 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908317 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939450 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939584 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 10:13:00.939555025 +0000 UTC m=+85.505858879 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939714 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939842 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939870 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939889 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939923 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939948 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.939929393 +0000 UTC m=+85.506233247 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939973 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.939959544 +0000 UTC m=+85.506263398 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940055 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940077 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940097 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940152 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.940136468 +0000 UTC m=+85.506440322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940208 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940285 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.9402348 +0000 UTC m=+85.506538654 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011353 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.040734 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.041043 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.041391 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:01.041317451 +0000 UTC m=+85.607621315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.090135 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.090335 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.090824 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.090947 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.091064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.091190 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.091307 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.091392 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.111969 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:53:56.134007885 +0000 UTC Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217680 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.300978 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.315402 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.320937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321614 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.324395 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.339819 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.355717 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.375364 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.399909 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424806 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.431093 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.443020 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.455891 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.475583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.492641 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.505472 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.519674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527408 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.538503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.552825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.566336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.577919 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.589407 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 
10:12:29.629423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629465 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629480 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.684839 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.698320 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.713889 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.726325 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.732212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732227 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.737994 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.749885 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.770699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.783378 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.800763 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.813371 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.826190 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834448 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834460 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.840386 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.860382 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.885524 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.904974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.926954 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936828 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.946514 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.957698 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.971400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.039979 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040079 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040131 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.113035 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:58:32.0404794 +0000 UTC Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143244 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246401 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246466 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452622 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555365 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658131 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760383 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760418 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863716 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966238 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068873 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089620 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089664 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089638 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089769 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089638 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089924 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089969 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.090073 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.113747 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:59:19.253549354 +0000 UTC Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.170987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171065 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272899 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.374955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375064 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477876 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683688 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787100 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787174 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096307 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096357 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096374 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.113914 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:35:06.007526705 +0000 UTC Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.198662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.198937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302567 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405220 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.611962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612066 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714671 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714750 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816836 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816859 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919197 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919216 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021610 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089325 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089402 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089456 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089418 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089586 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089730 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.090060 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.090371 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.114933 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:23:01.142018713 +0000 UTC Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124337 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124409 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227610 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.289002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.289015 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.311316 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.334088 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340193 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.355401 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359676 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.372060 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376092 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.389354 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.389518 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390786 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493654 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493735 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493777 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596811 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700147 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.803919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.803977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804049 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907026 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907099 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009961 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112699 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.116028 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:06:12.109870917 +0000 UTC Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215078 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215112 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317782 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419735 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419809 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419850 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522144 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624471 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624513 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726437 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726467 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829147 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932961 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036300 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036345 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089633 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089676 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089887 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.089881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.090068 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090132 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090193 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.116773 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:22:00.23134162 +0000 UTC Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139236 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139287 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344614 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.446911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447014 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447128 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550284 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962613 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962667 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065402 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.108708 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.117107 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:54:35.880128921 +0000 UTC Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.121389 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.139838 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.157425 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173065 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173491 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.193024 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.209579 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.226395 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.246519 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.275015 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276535 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.308554 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.329406 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.347196 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.362123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc 
kubenswrapper[4984]: I0130 10:12:36.378746 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378802 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.386073 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.399357 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.408818 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.422536 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480807 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480824 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.584007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685994 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788590 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.890942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.890997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891057 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993521 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089472 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089473 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089475 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089749 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089590 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089490 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089776 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095946 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095973 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.118077 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:05:13.186726731 +0000 UTC Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198349 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301515 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405694 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507093 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507173 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609980 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712725 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712773 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.817918 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818400 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.023895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.023977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024060 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.118333 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:25:43.604111783 +0000 UTC Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126138 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126161 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229274 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332283 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332308 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539339 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.746016 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848561 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848674 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951774 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053797 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089240 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089285 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089236 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089370 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089508 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089659 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089706 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.119329 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:45:47.713691812 +0000 UTC Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259468 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362808 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362848 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464852 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464867 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670955 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773214 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.079997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080496 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.119724 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:34:39.923055121 +0000 UTC Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.286986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389540 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492967 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.597062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.597162 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699875 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802200 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.904933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.904995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905057 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008352 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093311 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.093456 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093659 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.093722 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093713 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.094007 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.094369 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.094684 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.094985 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.095152 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110676 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.120787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:00:19.182518463 +0000 UTC Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.213025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.213045 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315433 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417947 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.625659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.625961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626173 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729293 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831414 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831451 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.933930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934634 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037047 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037108 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037137 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.121511 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:25:01.269652653 +0000 UTC Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242314 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.344694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345181 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345328 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447419 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550320 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652217 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652264 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652298 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754671 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754742 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857713 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.961015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.961035 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064501 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089164 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089203 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089215 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089180 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089380 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089528 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089679 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089785 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.122507 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:20:57.894448535 +0000 UTC Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166998 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269704 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372642 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475272 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475330 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575153 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575198 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.593734 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.598007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.598016 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.611713 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.615268 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.615368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.615378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.615395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.615404 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.633843 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.637743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.637795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.637813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.637837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.637856 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.656839 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661468 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.677944 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.678112 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679770 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782624 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885200 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988575 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091829 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.122799 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:31:57.753617379 +0000 UTC Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194271 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194282 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296546 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398302 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398410 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398440 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500721 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602451 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705266 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705301 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.808007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910531 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012599 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089276 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089319 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089386 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089509 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089559 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089720 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089752 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114428 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.123517 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:10:31.911648125 +0000 UTC Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318848 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420847 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420937 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516515 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516559 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e" exitCode=1 Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516589 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516925 4984 scope.go:117] "RemoveContainer" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522778 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.528360 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 
10:12:45.539679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b
9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"set
up\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.552026 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.560707 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.569595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc 
kubenswrapper[4984]: I0130 10:12:45.582946 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.591351 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.603743 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.615013 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625292 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625328 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625416 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.631571 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.641583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.657496 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.673774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.684944 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.695592 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.707689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.717919 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 
10:12:45.727646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727903 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.729858 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830339 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830914 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038524 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.101211 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.113290 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.124395 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:46:31.588911658 +0000 UTC Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.126206 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140365 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140409 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.144875 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.167302 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.183976 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.196586 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.206873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.217391 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.226755 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.236952 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242476 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.254885 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.271845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.281697 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.292241 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc 
kubenswrapper[4984]: I0130 10:12:46.306050 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.319541 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.331109 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345115 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345237 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447486 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.522196 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.522280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.536714 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.547775 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549887 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc 
kubenswrapper[4984]: I0130 10:12:46.549916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549927 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.559329 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.580199 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.601003 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.610979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.621711 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.630306 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.641295 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.648806 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b56
21187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651986 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.657809 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.671010 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.685070 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.695298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.708701 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.721774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.731452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.742849 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754702 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857646 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960571 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960617 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063235 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063304 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090147 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090173 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090206 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090288 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090371 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090497 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090583 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.124787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:39:02.502988753 +0000 UTC Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166283 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166360 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269679 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269703 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269720 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372460 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372624 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474607 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474615 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577176 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680114 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.782993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783065 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987709 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089884 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089963 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.125046 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:37:14.350439591 +0000 UTC Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192879 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192955 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295568 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295633 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398572 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501018 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501098 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603524 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706300 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706380 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809267 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809328 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911201 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013615 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090136 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090164 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090150 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090276 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090308 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090376 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090546 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090576 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.116978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117013 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117034 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.125582 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:28:33.698310343 +0000 UTC Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.219938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.219988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220029 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220045 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425878 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425907 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630299 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630375 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732161 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834556 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834566 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937681 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039671 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.126361 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:52:53.219580141 +0000 UTC Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142418 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244989 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.245004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.245014 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.355017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.355034 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458179 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458450 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560577 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560585 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663131 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869410 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869484 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972164 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074877 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089625 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089627 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089718 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089805 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089907 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089947 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.090196 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.090312 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.126829 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:01:10.219525363 +0000 UTC Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176779 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176787 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176836 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279069 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279091 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381625 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381690 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483577 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689316 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791934 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895101 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997410 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099922 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.127808 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:35:04.257070332 +0000 UTC Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202534 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305507 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305531 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408448 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512174 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718187 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821547 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925639 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.027994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028163 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.089730 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.090607 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.090729 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.090930 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.090965 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.091081 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.091328 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.091799 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.092307 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.127986 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:41:40.58654989 +0000 UTC Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131732 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234635 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440871 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440926 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544232 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.547643 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.552007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.554382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.582755 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.604294 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.635377 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647850 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.665425 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.701452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.717595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.736465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 
10:12:53.750118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750177 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.752308 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.775703 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd 
in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.804758 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.819490 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.839442 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852815 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.856974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc 
kubenswrapper[4984]: I0130 10:12:53.873296 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.887111 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.903215 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.917501 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.930365 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc 
kubenswrapper[4984]: I0130 10:12:53.955640 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058537 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058608 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066629 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.081532 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086159 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086207 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.103289 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106855 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.118551 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122818 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122853 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.128976 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:30:31.170364875 +0000 UTC Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.133944 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",
\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137605 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.149127 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.149243 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.160997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161036 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161045 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161070 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264199 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367637 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470767 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.558517 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.559925 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564814 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" exitCode=1 Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564862 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564905 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.566048 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.566368 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574431 4984 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574548 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.580416 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.596988 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.616229 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.634022 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.647369 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.664278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679006 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679116 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679140 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.680979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.700664 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.729478 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd 
in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 
default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.754879 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.768079 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.778966 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.782975 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783103 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.794647 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc 
kubenswrapper[4984]: I0130 10:12:54.811096 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.825635 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.840041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.858957 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.873648 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc 
kubenswrapper[4984]: I0130 10:12:54.886581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886597 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089131 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089221 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089293 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089149 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089386 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089518 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094625 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.129308 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:09:19.203293048 +0000 UTC Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197583 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402716 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402770 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.506014 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.570814 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.575121 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.575311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.587516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.604415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.609016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.609032 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.622674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.640212 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.656626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.670134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc 
kubenswrapper[4984]: I0130 10:12:55.687804 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.704481 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712021 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 
10:12:55.712132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712186 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.720680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.733617 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.747108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.762219 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.785367 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.812897 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814695 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.827116 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a
7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.842609 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.854826 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.866234 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 
10:12:55.917273 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917326 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020489 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020616 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.107008 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metri
cs-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123418 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123554 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.126982 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.129825 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:01:31.424002619 +0000 UTC Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.144067 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.157624 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.174074 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc 
kubenswrapper[4984]: I0130 10:12:56.198939 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.216384 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226635 4984 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.232455 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.255873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.271227 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.287146 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.319331 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329942 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.350373 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.368770 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.391134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.409334 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.424453 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.432943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 
10:12:56.432999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433064 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.441815 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536106 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536184 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743586 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846831 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949438 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949513 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949535 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089743 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089745 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090197 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089882 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089807 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090369 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090480 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090569 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.130379 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:17:30.537997885 +0000 UTC Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156796 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362867 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362918 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465420 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.568886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.568979 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671782 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775566 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878930 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.987043 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.091991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092113 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.131564 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:57:36.493849392 +0000 UTC Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196829 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196914 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.197011 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299731 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299805 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.402952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403051 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506661 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506683 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609654 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609676 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712921 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712966 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.713013 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815801 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918879 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020898 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020934 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020973 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089737 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089747 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089813 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089924 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090046 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090200 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090394 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124847 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.131991 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:58:27.650482125 +0000 UTC Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227951 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330281 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330295 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.432998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433167 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535845 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638956 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638982 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843966 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843974 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.946798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.050760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051009 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051077 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051146 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051212 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.132109 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:34:17.683163391 +0000 UTC Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.155103 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270821 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270934 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270951 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.374012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.374029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476714 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476807 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579650 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682136 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784737 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784753 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888627 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888762 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991455 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991470 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991484 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024487 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024642 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024622409 +0000 UTC m=+149.590926233 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024640 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024791 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024821 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024834 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024845 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024872 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024861846 +0000 UTC m=+149.591165670 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024913 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024890947 +0000 UTC m=+149.591194811 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024913 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024962 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024985 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025069 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.025044301 +0000 UTC m=+149.591348165 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025067 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025221 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.025175975 +0000 UTC m=+149.591479859 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090030 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090100 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090168 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090504 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090970 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.091162 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090710 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093881 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.126199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.126529 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.126652 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.126621384 +0000 UTC m=+149.692925248 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.133155 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:42:49.741078188 +0000 UTC
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197425 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299755 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402637 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402687 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402700 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402710 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505839 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.609988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610104 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610150 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.712688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815568 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021492 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.133864 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:18:22.401445844 +0000 UTC
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.227961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228058 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228112 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228138 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331365 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433811 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.536954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537072 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743706 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846197 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846209 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949385 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055532 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055597 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089366 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089431 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089459 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089467 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089605 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089938 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.090113 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.134815 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:38:30.002087303 +0000 UTC Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158924 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.261998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365089 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365227 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572500 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675596 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779266 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779279 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882863 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882960 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089231 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.135617 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:58:40.861159414 +0000 UTC Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192123 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192163 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.296006 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399198 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458446 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.473035 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.477922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.477980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478074 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.497392 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.502587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.502639 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.502655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.502674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.502687 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.519331 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.523786 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.523822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.523836 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.523852 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.523865 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.552742 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558397 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.578203 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.578409 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580517 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684129 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684143 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786790 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889265 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889329 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992679 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.089817 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.089854 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.089963 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.090033 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090052 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.090087 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090147 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090208 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095196 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095241 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095290 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.136384 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:39:18.803256216 +0000 UTC Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198329 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301533 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507436 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609128 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609141 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712167 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814505 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020775 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.110181 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122955 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.132401 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.136523 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:38:55.835879051 +0000 UTC Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.152293 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.173637 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.197459 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.212889 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 
10:13:06.226196 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226328 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226346 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.234180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.264474 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.297036 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.318592 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 
10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329391 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.339995 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.356168 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc 
kubenswrapper[4984]: I0130 10:13:06.378004 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.389830 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.402459 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.424343 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.445142 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.464423 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535534 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535568 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638613 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741868 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844981 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051436 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.089948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090107 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090299 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090285 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090382 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090531 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090725 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090801 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.137373 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:43:27.97417421 +0000 UTC Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154664 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154675 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.258027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.258050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.361008 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464933 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.568004 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671542 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.774992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775126 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878805 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878818 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982579 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086279 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086294 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.091177 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:08 crc kubenswrapper[4984]: E0130 10:13:08.091526 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.138204 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:54:40.196181958 +0000 UTC Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.190008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.190021 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293562 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499209 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602855 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706100 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706144 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809134 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016224 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089349 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089461 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.089551 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.089821 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.090027 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.090099 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121987 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.138919 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:01:43.386315757 +0000 UTC Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327866 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431178 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535339 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535361 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637809 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637835 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637854 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740724 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844223 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948228 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050838 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.104294 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.139719 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:26:02.758166054 +0000 UTC Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153502 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256825 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360464 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.463843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.463999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464077 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464122 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671601 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774804 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878450 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878473 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981928 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085726 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085749 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089833 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089861 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089894 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090003 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.090044 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090281 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090390 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.140746 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:41:06.89260533 +0000 UTC Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189298 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.292889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.292999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293019 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293047 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293067 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396350 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499277 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499317 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602272 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602285 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706354 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706407 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809232 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809268 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809278 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911735 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116918 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116995 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.141759 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:59:45.070841098 +0000 UTC Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.321948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322001 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322043 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.424980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527584 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527595 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630704 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733275 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836505 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.939913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940094 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043396 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043412 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.089993 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090047 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090018 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090098 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090178 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090420 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090491 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090576 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.142545 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:02:46.305985641 +0000 UTC Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145769 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145796 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145808 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351940 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557311 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660214 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.766103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.766747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869277 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972175 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074975 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.143214 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:12:18.757166062 +0000 UTC Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177877 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280829 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280881 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383691 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486891 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588690 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588714 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615766 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615793 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.634734 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639371 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.654835 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658880 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658982 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.676821 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681306 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.696843 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700640 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.712893 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.713001 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714344 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816417 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816448 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918687 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918721 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020792 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089454 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089463 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089537 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089592 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.089731 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.089949 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.090095 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.090229 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123759 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.143957 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:04:57.782447177 +0000 UTC Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226612 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330311 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535628 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535671 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638963 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742411 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845481 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845507 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845519 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948807 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948841 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050919 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.109461 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.129859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.144335 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:10:17.164121282 +0000 UTC Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.147114 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83196244-71fa-4003-aa05-0f1a7de9db9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c475de8d49f5aefa32c82d036020b47bc55061e42d5da99bb1052ef7f0ca0b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc 
kubenswrapper[4984]: I0130 10:13:16.153905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153960 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.166040 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.182123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.198740 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.219693 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.237383 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.253510 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 
10:13:16.257436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.272633 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.295907 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.321080 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.336825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.349584 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359680 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.361955 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc 
kubenswrapper[4984]: I0130 10:13:16.380178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.393132 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.406979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.425427 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462941 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566421 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668819 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771238 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771285 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873941 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873981 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975812 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089772 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089826 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.089880 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089890 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089908 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.089988 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.090121 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.090288 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.145413 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:57:52.65123404 +0000 UTC Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180102 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283479 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283518 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.386985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387087 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489716 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489759 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489776 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593418 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593440 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695885 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901561 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.146301 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:09:36.204741492 +0000 UTC Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314125 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.519942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520074 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624907 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727514 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830683 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934244 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089634 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089635 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089740 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089733 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089874 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089926 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.090315 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.090762 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.090930 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139354 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139800 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139897 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.146445 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:04:44.893902351 +0000 UTC Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242726 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242766 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242783 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346327 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552578 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655101 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655121 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758131 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758185 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861462 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861543 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964462 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067964 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.068008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.068028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.147660 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:56:23.980491353 +0000 UTC Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171253 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274661 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274756 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274770 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377577 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377753 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480871 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584826 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687392 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687433 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.791005 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894169 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997512 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997542 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997564 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090137 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090205 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090203 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090304 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090395 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090517 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090691 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090854 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100812 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.148135 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:50:05.524434661 +0000 UTC Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204470 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307707 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618267 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721524 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721548 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721603 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824387 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.928029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030884 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030927 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133701 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.148632 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:24:22.752302153 +0000 UTC Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340789 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.341046 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444374 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444451 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444491 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547655 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.755134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.755396 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859006 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859093 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859109 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962769 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.066010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.066026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089710 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089708 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089712 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089830 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090028 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090114 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090214 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090430 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.149194 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:32:06.781158225 +0000 UTC Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.168919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.168994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.272959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273042 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273115 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.375845 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.375982 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376003 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376041 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582521 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684628 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684658 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787811 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787832 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891428 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891757 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995484 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995538 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.149969 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:44:18.903990151 +0000 UTC Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203239 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306389 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511579 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614703 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717315 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717364 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.898970 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.904012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.904028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.917570 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923611 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923622 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.944629 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949418 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949446 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.968827 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973042 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973159 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.992671 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.992840 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.994958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995104 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995128 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089592 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089650 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089654 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089773 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089980 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.090047 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098404 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.150465 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:30:25.526642384 +0000 UTC Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201640 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304826 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407330 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407469 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.510032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.510046 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613420 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613562 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.716798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717013 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717105 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820231 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923723 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027357 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.107329 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.125230 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131496 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131594 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.140032 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.151231 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:49:11.216468381 +0000 UTC Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.153561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 
30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.173394 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6c
c61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.188167 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b
235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.199756 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.208101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83196244-71fa-4003-aa05-0f1a7de9db9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c475de8d49f5aefa32c82d036020b47bc55061e42d5da99bb1052ef7f0ca0b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.218222 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.229634 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.233990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234070 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.242450 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.254474 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.275639 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.288439 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.301558 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.317452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.332657 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 
10:13:26.337287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337299 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337544 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.354139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.382404 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440734 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.750472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.750952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854455 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854472 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957510 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060049 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060205 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089749 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089774 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.089859 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089757 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.090021 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090236 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090296 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.151702 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:10:56.279908438 +0000 UTC Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266542 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369867 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369897 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369919 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473916 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577796 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680332 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784208 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.887653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.887902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990611 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990637 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990649 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093759 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093878 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.152535 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:46:03.082569602 +0000 UTC Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.196904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197563 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.403010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.403035 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607920 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.608002 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.711003 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.711015 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.814012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.814029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.917005 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020521 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089781 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089775 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089806 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090104 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.090144 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090217 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.089965 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090322 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122792 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.153143 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:03:44.187793843 +0000 UTC Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225668 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225740 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225780 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329014 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329094 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535935 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535960 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.536012 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638494 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843382 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843393 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946516 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049419 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.090997 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:30 crc kubenswrapper[4984]: E0130 10:13:30.091344 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151700 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151814 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.153983 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:09:02.272280635 +0000 UTC Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.254954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.254998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255030 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255048 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.363554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.363745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364722 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364822 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573243 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.676973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677193 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884471 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884544 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884575 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.986993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089297 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089374 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089624 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089631 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089739 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089798 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089889 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.154826 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:40:45.750918759 +0000 UTC Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192658 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192705 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.398974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399029 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399039 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502235 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502613 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.606732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607503 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.698391 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699630 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699875 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" exitCode=1 Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699986 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.700298 4984 scope.go:117] "RemoveContainer" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.701044 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.701446 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc 
kubenswrapper[4984]: I0130 10:13:31.711937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711983 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.754089 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.75406703 podStartE2EDuration="21.75406703s" podCreationTimestamp="2026-01-30 10:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.730162965 +0000 UTC m=+116.296466799" watchObservedRunningTime="2026-01-30 10:13:31.75406703 +0000 UTC m=+116.320370854" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.776717 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.776694409 podStartE2EDuration="1m29.776694409s" podCreationTimestamp="2026-01-30 10:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 10:13:31.755038888 +0000 UTC m=+116.321342812" watchObservedRunningTime="2026-01-30 10:13:31.776694409 +0000 UTC m=+116.342998253" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.815482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.815896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816089 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816307 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816449 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.878950 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=95.878935211 podStartE2EDuration="1m35.878935211s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.874309928 +0000 UTC m=+116.440613792" watchObservedRunningTime="2026-01-30 10:13:31.878935211 +0000 UTC m=+116.445239035" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.888880 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.888855485 podStartE2EDuration="1m2.888855485s" podCreationTimestamp="2026-01-30 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.888105464 +0000 UTC m=+116.454409328" watchObservedRunningTime="2026-01-30 10:13:31.888855485 +0000 UTC m=+116.455159349" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918608 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.941633 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podStartSLOduration=95.941612648 podStartE2EDuration="1m35.941612648s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.94098517 +0000 UTC m=+116.507289024" watchObservedRunningTime="2026-01-30 10:13:31.941612648 +0000 UTC m=+116.507916472" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.967071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" podStartSLOduration=94.967046417 podStartE2EDuration="1m34.967046417s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.966881943 +0000 UTC m=+116.533185807" watchObservedRunningTime="2026-01-30 10:13:31.967046417 +0000 UTC m=+116.533350271" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.987006 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.986988669 podStartE2EDuration="1m34.986988669s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 10:13:31.985588679 +0000 UTC m=+116.551892513" watchObservedRunningTime="2026-01-30 10:13:31.986988669 +0000 UTC m=+116.553292503" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.014883 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6tdgl" podStartSLOduration=96.014864338 podStartE2EDuration="1m36.014864338s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.014747375 +0000 UTC m=+116.581051199" watchObservedRunningTime="2026-01-30 10:13:32.014864338 +0000 UTC m=+116.581168172" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020944 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.042495 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" podStartSLOduration=96.04247937 podStartE2EDuration="1m36.04247937s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.041706028 +0000 UTC m=+116.608009852" watchObservedRunningTime="2026-01-30 10:13:32.04247937 +0000 UTC m=+116.608783194" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.057659 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l5dvh" podStartSLOduration=95.057642025 podStartE2EDuration="1m35.057642025s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.0567627 +0000 UTC m=+116.623066524" watchObservedRunningTime="2026-01-30 10:13:32.057642025 +0000 UTC m=+116.623945839" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123595 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.155514 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:58:48.170315447 +0000 UTC Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.225850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226567 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328805 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328816 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431730 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637815 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637892 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637933 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.706420 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.740972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741068 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844660 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050138 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089608 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089613 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089701 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.089785 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.089900 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.090109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.090201 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.090379 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154486 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.156310 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:15:01.089259363 +0000 UTC Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.259195 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361668 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464617 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464648 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464668 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.568015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.568028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671800 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671831 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.774968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775114 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775189 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878775 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.982294 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.087193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.087239 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.156866 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:40:35.435227464 +0000 UTC Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190646 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294242 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.396965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397097 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603924 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.706928 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707077 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811075 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811217 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914377 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018337 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018489 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:35Z","lastTransitionTime":"2026-01-30T10:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088782 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088797 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088806 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:35Z","lastTransitionTime":"2026-01-30T10:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089670 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089759 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089756 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089753 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.089866 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.089945 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.090090 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.090199 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.145998 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf"] Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.146349 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.148597 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.149126 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.150387 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.150972 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.157341 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:48:27.765167299 +0000 UTC Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.157386 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.175895 4984 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.301850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302058 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302433 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302521 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302573 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: 
\"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403600 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403684 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403769 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403925 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.404369 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.404672 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.413360 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc 
kubenswrapper[4984]: I0130 10:13:35.426145 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.475471 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.717970 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" event={"ID":"dda86ca6-bc08-4931-b254-fdcb9483081e","Type":"ContainerStarted","Data":"9acb19d12c8f42e6ff263498c55bc7f980a6c9a454367aa8f8009c9fea132ed4"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.718013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" event={"ID":"dda86ca6-bc08-4931-b254-fdcb9483081e","Type":"ContainerStarted","Data":"9be3a224d7bc76cee8c1d6659423194c6d53348bf17c8698f3f8df0e0f7ca8e0"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.732055 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" podStartSLOduration=98.731988673 podStartE2EDuration="1m38.731988673s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:35.73084038 +0000 UTC m=+120.297144214" watchObservedRunningTime="2026-01-30 10:13:35.731988673 +0000 UTC m=+120.298292527" Jan 30 10:13:36 crc kubenswrapper[4984]: E0130 10:13:36.045770 4984 
kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 10:13:36 crc kubenswrapper[4984]: E0130 10:13:36.182587 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089835 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089872 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089895 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089941 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090018 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090111 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090472 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089723 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089753 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089758 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.089849 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089739 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.089970 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.090318 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.090915 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089581 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089660 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.089798 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.089923 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089753 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.090162 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.183840 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090204 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090226 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090696 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090366 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090324 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090791 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090753 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090989 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.090182 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.750877 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.752042 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"} Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.778344 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bnkpj" podStartSLOduration=108.778321736 podStartE2EDuration="1m48.778321736s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:44.776646688 +0000 UTC m=+129.342950512" watchObservedRunningTime="2026-01-30 10:13:44.778321736 +0000 UTC m=+129.344625560" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089765 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.089960 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090107 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.090367 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090719 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090880 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.091235 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.756693 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.758969 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.759625 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.794658 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podStartSLOduration=109.794641128 podStartE2EDuration="1m49.794641128s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:45.792789385 
+0000 UTC m=+130.359093269" watchObservedRunningTime="2026-01-30 10:13:45.794641128 +0000 UTC m=+130.360944952" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.970599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdmkd"] Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.970726 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.970856 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:46 crc kubenswrapper[4984]: E0130 10:13:46.185328 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090037 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090098 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.090215 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.090409 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090990 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.091201 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:48 crc kubenswrapper[4984]: I0130 10:13:48.089590 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:48 crc kubenswrapper[4984]: E0130 10:13:48.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089609 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089669 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.089938 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.090078 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.090376 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:50 crc kubenswrapper[4984]: I0130 10:13:50.089092 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:50 crc kubenswrapper[4984]: E0130 10:13:50.089269 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089186 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089235 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089195 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089512 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089411 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089691 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.089556 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.092741 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.093535 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089464 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089568 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089568 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.093859 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.094121 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.098722 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.103146 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.505163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.553040 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.553379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.555550 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.555898 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.560490 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.560903 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.561979 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.562156 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.567612 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.567686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568038 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568231 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568544 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568947 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: W0130 10:13:55.569040 4984 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 30 10:13:55 crc kubenswrapper[4984]: E0130 10:13:55.569064 4984 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568957 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569448 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569573 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569778 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569997 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570106 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570119 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570206 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570266 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570366 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.574761 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.576553 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.578856 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.579462 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.586240 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587165 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587336 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587789 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587943 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587992 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.591930 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.595864 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.596120 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.596318 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.603950 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.604208 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.604639 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.605116 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.605172 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622452 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622642 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622898 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.623130 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.623571 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624333 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624903 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624967 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625126 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625325 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625479 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625507 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625964 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.626305 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.626569 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.627180 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.627749 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633075 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633449 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633664 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633750 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633804 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633959 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634082 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634236 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634086 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634239 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634089 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634578 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634821 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: 
\"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634907 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634954 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634992 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635026 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635060 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635088 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635179 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" 
(UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635251 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635385 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635418 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635452 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635526 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635627 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635720 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635801 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635837 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635885 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635917 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635982 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636014 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636082 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636152 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636207 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636257 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" 
(UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636590 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636824 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636997 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.638368 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.638915 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.639322 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.639674 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.640595 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.640721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641581 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641772 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641991 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644401 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644921 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645553 4984 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645532 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645776 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646011 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646567 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646949 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.648766 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649298 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649397 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649625 4984 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649665 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649609 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649802 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649888 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.651419 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.652182 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.653447 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.653992 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.654127 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.654269 4984 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.656372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.657948 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.658911 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.658929 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.670322 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.672522 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.672921 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.674235 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.677610 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.678376 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.678842 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.679361 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.679817 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.685880 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.686491 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.687029 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.687694 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689139 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689563 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.691353 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.691522 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689592 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.692152 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.692231 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.694797 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.694856 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.698700 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b5gpb"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.698998 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700147 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700554 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700747 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700831 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700948 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700974 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700987 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701096 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701137 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701228 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701299 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.702885 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.703751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.704363 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.704825 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708736 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708548 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708600 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.709598 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710177 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710695 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710811 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710934 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711310 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711613 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711641 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.712120 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.712709 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.713094 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.713681 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714081 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714269 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714467 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714628 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714776 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.715156 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.716295 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.716940 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.717519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.718448 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.720127 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.720160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724711 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724762 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.725421 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.732747 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.732799 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k9xrn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.733514 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.736371 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tnwfs"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.738370 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.737315 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.740310 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.740734 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.741982 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742018 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742041 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742079 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742098 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742117 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742147 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742180 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.750375 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.751159 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751183 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751206 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751226 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751251 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod 
\"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751314 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751352 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751372 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751418 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751458 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67jg\" (UniqueName: 
\"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751487 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751510 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751516 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751532 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: 
I0130 10:13:55.751551 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751613 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751631 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: 
\"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751675 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751695 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751733 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751763 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751807 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751842 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751865 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751864 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751883 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751945 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751985 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752002 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752024 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752517 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752599 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752625 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752650 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752734 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752801 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752845 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752866 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752890 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753003 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753032 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753053 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753077 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753094 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753101 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753136 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753249 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbsm\" (UniqueName: \"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.753371 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjx9\" (UniqueName: 
\"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753648 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753772 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755147 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755501 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758012 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758930 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758938 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759199 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759912 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759947 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760073 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760099 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" 
(UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760221 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760308 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760474 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760504 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760554 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760750 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760990 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761072 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761130 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: 
\"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761185 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761237 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761601 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761678 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762351 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762465 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762535 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762696 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762823 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: 
\"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763092 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763400 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: 
\"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763609 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763704 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763928 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763983 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764168 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764248 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764050 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764326 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764361 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764398 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764417 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764455 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764516 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764561 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764613 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764792 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: 
I0130 10:13:55.764817 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764854 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764895 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod 
\"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764949 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765058 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765185 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765350 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765399 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765462 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765485 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765623 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765622 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod 
\"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.767032 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770094 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.774195 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.775708 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.777167 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.777224 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.778160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-tnwfs"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.779187 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.780276 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.782137 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.783575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.784228 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.786136 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.789795 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.792518 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.793544 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.794682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.795401 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.796802 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.798589 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.800167 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.801053 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.802451 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.803204 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.803534 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.804529 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.805529 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.806162 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.806754 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.808106 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.812846 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.826557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.835685 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.855518 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866426 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866496 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866518 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866541 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866568 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866589 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod 
\"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866615 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866692 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.866744 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866768 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866792 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866846 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866870 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866894 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866920 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866968 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866993 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867018 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867069 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867106 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.867131 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867180 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867263 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867305 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867374 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867436 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867448 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867491 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868439 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868480 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868488 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868513 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868596 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: 
\"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868645 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868669 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbsm\" (UniqueName: 
\"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868748 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868808 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868835 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868858 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868881 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868907 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868942 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 
30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869016 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869063 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869076 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869085 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " 
pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869109 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869140 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869190 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869225 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869301 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869338 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869362 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869390 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870209 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870364 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870014 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871076 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871095 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871162 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871218 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871231 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871251 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871249 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871359 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871408 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871451 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871409 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871487 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871508 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871532 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871548 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871566 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871570 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871584 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871664 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871682 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.871701 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871782 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871825 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871921 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872012 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872787 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872828 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872892 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873379 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874049 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874058 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874155 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874959 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.875663 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876043 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.876090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876481 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876898 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.877620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.879540 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.896056 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.907039 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.916121 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.932022 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.935442 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.944806 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.961177 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.965053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.975772 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.980857 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.995891 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.005394 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.015828 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.017185 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.036298 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.038111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.063669 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.073357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.075596 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.095399 4984 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.102531 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.115230 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.136106 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.156375 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.160820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.175660 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.195220 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.225349 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 10:13:56 crc 
kubenswrapper[4984]: I0130 10:13:56.229506 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.236734 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.243809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.255628 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.261884 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.275186 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.281269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " 
pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.295670 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.303321 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.315746 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.336310 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.376007 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.396015 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.402323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.420362 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.436276 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.441312 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.456020 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.464635 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.476475 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.496097 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.504703 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " 
pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.516430 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.523462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.536114 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.538935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.556299 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.576241 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.596659 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.604475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.616901 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.635934 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.641715 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.655870 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.657756 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.676147 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.695615 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 10:13:56 crc kubenswrapper[4984]: 
I0130 10:13:56.714303 4984 request.go:700] Waited for 1.002682546s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.716370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.736144 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.740292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.757662 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.761561 4984 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.761663 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client podName:3cb637fe-7a94-4790-abf9-3beb38ecb8da nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.261635432 +0000 UTC m=+141.827939266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client") pod "apiserver-76f77b778f-fzff9" (UID: "3cb637fe-7a94-4790-abf9-3beb38ecb8da") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.763659 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.775767 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.782645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.783159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.795259 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.816428 4984 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.836154 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.856157 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.863618 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867592 4984 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867734 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs podName:41fed1a2-7c34-4363-bad0-ac0740961cad nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.367716562 +0000 UTC m=+141.934020386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs") pod "multus-admission-controller-857f4d67dd-j6cv2" (UID: "41fed1a2-7c34-4363-bad0-ac0740961cad") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867597 4984 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867942 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert podName:50a9dda1-acf5-471f-a6cd-46e77a1dfa24 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.367929719 +0000 UTC m=+141.934233543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert") pod "catalog-operator-68c6474976-6kcss" (UID: "50a9dda1-acf5-471f-a6cd-46e77a1dfa24") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867632 4984 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868146 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume podName:fbdde9dd-69cf-405d-9143-1739e3acbdde nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368135456 +0000 UTC m=+141.934439280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume") pod "collect-profiles-29496120-p5sk8" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868208 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868239 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868321 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368301572 +0000 UTC m=+141.934605386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868358 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368335663 +0000 UTC m=+141.934639527 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867661 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868207 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868421 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368406935 +0000 UTC m=+141.934710779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868439 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368431496 +0000 UTC m=+141.934735420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868803 4984 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868928 4984 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868936 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca podName:b92a67bb-8407-4e47-9d9a-9d15398d90ed nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368901112 +0000 UTC m=+141.935204956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca") pod "marketplace-operator-79b997595-9lf7j" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868959 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics podName:b92a67bb-8407-4e47-9d9a-9d15398d90ed nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368952714 +0000 UTC m=+141.935256538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics") pod "marketplace-operator-79b997595-9lf7j" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.870003 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.870085 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.370064472 +0000 UTC m=+141.936368336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871657 4984 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871698 4984 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871720 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key podName:a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10 nodeName:}" failed. 
No retries permitted until 2026-01-30 10:13:57.371700677 +0000 UTC m=+141.938004531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key") pod "service-ca-9c57cc56f-zl47s" (UID: "a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871768 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config podName:fdc71eba-e354-4963-967a-7e1c908467b5 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.371753989 +0000 UTC m=+141.938057813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config") pod "kube-apiserver-operator-766d6c64bb-vqg9w" (UID: "fdc71eba-e354-4963-967a-7e1c908467b5") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872590 4984 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872713 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls podName:3c2dcd5a-96f0-48ff-a004-9764d24b66b1 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.372700461 +0000 UTC m=+141.939004345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-g5m7t" (UID: "3c2dcd5a-96f0-48ff-a004-9764d24b66b1") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872602 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872905 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.372894318 +0000 UTC m=+141.939198212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.875111 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.895174 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.916189 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.935520 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.955200 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.974882 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.995350 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.015870 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.035234 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.061718 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.075994 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.095627 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.116350 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.136541 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.156488 4984 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.175518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.196900 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.215917 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.236858 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.255869 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.277389 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.295792 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.301005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.316178 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 10:13:57 crc 
kubenswrapper[4984]: I0130 10:13:57.336367 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.357782 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.375844 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.402768 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403076 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403267 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403988 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404150 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404408 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 
10:13:57.404797 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405110 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405169 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405965 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 
10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406306 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406406 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406840 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.407035 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.407180 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.413961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.414647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.414749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: 
\"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.415505 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417119 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417586 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417769 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417931 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:57 crc 
kubenswrapper[4984]: I0130 10:13:57.420523 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.436482 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.455572 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.475791 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.495413 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.538515 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.554142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.579598 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.599962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.613670 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.630853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.633244 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.635477 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.678461 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.698467 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.705227 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.715574 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.717674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.733849 4984 request.go:700] Waited for 1.930258294s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.735899 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.755965 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.758292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.775774 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.780570 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.794652 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.795989 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.816051 4984 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.820544 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.831960 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.838756 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.854626 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:57 crc kubenswrapper[4984]: W0130 10:13:57.878808 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608dec52_033b_4c24_9fbf_8fefe81621a9.slice/crio-bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570 WatchSource:0}: Error finding container bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570: Status 404 returned error can't find the container with id bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570 Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.880154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.896260 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.897096 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.901778 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.920539 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.931756 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.939550 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.949716 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.959148 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.971105 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.989998 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.013286 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 
10:13:58.034541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.048212 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.057624 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.070126 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.079344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbsm\" (UniqueName: \"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.091991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.101797 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.114217 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.117784 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.127669 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.133422 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.143607 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.153537 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.156365 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.181196 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.181418 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:58 crc 
kubenswrapper[4984]: I0130 10:13:58.186684 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.193780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.212495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.215442 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.215512 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb88289_55c4_4710_a8a2_293d430152db.slice/crio-46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c WatchSource:0}: Error finding container 46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c: Status 404 returned error can't find the container with id 46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.233396 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.243703 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.248513 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.255867 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.258464 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63805acf_f9ac_4417_824f_6640f8836b3a.slice/crio-61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049 WatchSource:0}: Error finding container 61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049: Status 404 returned error can't find the container with id 61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049 Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.266224 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.272069 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.276332 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.300041 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.301602 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.302236 4984 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.302304 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client podName:3cb637fe-7a94-4790-abf9-3beb38ecb8da nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.302284736 +0000 UTC m=+143.868588550 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client") pod "apiserver-76f77b778f-fzff9" (UID: "3cb637fe-7a94-4790-abf9-3beb38ecb8da") : failed to sync secret cache: timed out waiting for the condition Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.331605 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.334496 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.336177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.336908 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.341368 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.350078 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.353538 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.353857 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f7d13c_e0e5_47cd_b819_8ad8e6e1e761.slice/crio-848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af WatchSource:0}: Error finding container 848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af: Status 404 returned error can't find the container with id 848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.355880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.356841 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.363292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.372715 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.378532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.378808 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.388556 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.392709 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.399249 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.403442 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.408639 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.422618 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.434624 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.441324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.450524 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d031ce5_81d8_4a93_8ef6_a97a86e06195.slice/crio-c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9 WatchSource:0}: Error finding container c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9: Status 404 returned error can't find the container with id c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9 Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.457581 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.460574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.506108 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.506306 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.508215 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.522115 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.533977 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534044 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534316 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534347 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: 
\"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534408 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534436 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534486 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534521 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: 
\"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534541 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534582 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.534867 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.034854888 +0000 UTC m=+143.601158762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.552610 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.559654 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.592110 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"] Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.601552 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2c9228_6181_419f_acdb_869007ac6f6c.slice/crio-08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063 WatchSource:0}: Error finding container 08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063: Status 404 returned error can't find the container with id 08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063 Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.617925 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.639160 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.639811 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640129 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640250 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") 
pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640521 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640547 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641401 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641501 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641738 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642134 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642174 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.643484 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.143465054 +0000 UTC m=+143.709768958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.645227 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.645440 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646118 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8dl\" (UniqueName: 
\"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646349 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646424 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.648679 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.14866666 +0000 UTC m=+143.714970484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652324 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652622 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652912 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod 
\"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.653336 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.657572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.666454 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.668422 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.679808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.698994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.714555 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.722877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.724759 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.732596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.758642 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762255 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762334 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762393 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762471 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762634 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8dl\" (UniqueName: \"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762722 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762758 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762787 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763162 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763252 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763348 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763379 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785697 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785757 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785843 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.786341 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.286321562 +0000 UTC m=+143.852625386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.788405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.790578 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.790811 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.791326 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.798017 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.800299 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.802383 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8dl\" (UniqueName: \"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.808311 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.811881 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.824912 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.830020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.850674 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" event={"ID":"8fb88289-55c4-4710-a8a2-293d430152db","Type":"ContainerStarted","Data":"4fca571edf5f9718dc5f0396111c559d14291ec33ffb072b810473ab7ddb77fa"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.850749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" event={"ID":"8fb88289-55c4-4710-a8a2-293d430152db","Type":"ContainerStarted","Data":"46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.852486 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.860124 4984 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k55nt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.860208 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" podUID="8fb88289-55c4-4710-a8a2-293d430152db" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.861133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.865347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.865727 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.365708846 +0000 UTC m=+143.932012670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.875104 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" event={"ID":"01fe4b95-41f9-432d-b597-3941f219b7af","Type":"ContainerStarted","Data":"a57a2c30691a6546a05303d30cd231a0294ddbcd0742bd77474e4dcebb493e1a"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.884301 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"41abe91d6377d1be83ade8ce7a07b7a076924bbd33ef3cef1a1710510f6cb9b3"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.890371 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" event={"ID":"bc2c9228-6181-419f-acdb-869007ac6f6c","Type":"ContainerStarted","Data":"08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.892035 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jc8ph" event={"ID":"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761","Type":"ContainerStarted","Data":"848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.894666 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:58 crc 
kubenswrapper[4984]: I0130 10:13:58.894700 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.894763 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.899108 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" event={"ID":"a5c7a47a-7861-4e43-b3f8-a187fc65f041","Type":"ContainerStarted","Data":"3bc8f0953d72545b60497ae94ad96ff16d89c4dcdc37a3b078ced73ce53e51bf"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.912232 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerStarted","Data":"5d2a7595aa7be4a2d24c3db3a03ceede193b8f38eb6567b569e38559c698d2a9"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.912527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.915021 4984 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-v6xww container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 
30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.915070 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.919057 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.934849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerStarted","Data":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.934905 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerStarted","Data":"ea7973a6b7aeb56d77b3657c44c45b40105b1dffec897b668fde3fd406ab2c03"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.936076 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.940221 4984 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5sdnz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.940636 4984 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.948622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" event={"ID":"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0","Type":"ContainerStarted","Data":"103528758bdcda2daf4733d5f787a5ab02533f96f8b25c345c3381791d91f8b3"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.958241 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" event={"ID":"50a9dda1-acf5-471f-a6cd-46e77a1dfa24","Type":"ContainerStarted","Data":"582d5ceddd62e3e6c034668bfc56e1c1dc891d35188ba5cb001d91b7ec29fc9a"} Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.966560 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.967839 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.467820221 +0000 UTC m=+144.034124045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.006541 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" event={"ID":"5d031ce5-81d8-4a93-8ef6-a97a86e06195","Type":"ContainerStarted","Data":"c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.009229 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"cce9b5b312fa895c5861cec669487a87ae27962623bf7508ff3af357237da0cb"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.010291 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerStarted","Data":"c555d93237ec1b9fa5e2f685eab7146593879790d731765baa1d6201c0c59e26"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.040978 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"56bb67d67f6487bc2ce5657ae521b69cbba4c9372c5640fa7fe5c05e84b7b5a3"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.041021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" 
event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"b1612a29cce8d1a4b851623ccd75f2ac8d02fc765bd405a9e59595d860ac3506"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.041031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"01285bf4262939096b0619a50d238d6faaa4374bfa3ad0d25135d4d6f8d97ac2"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.043068 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" event={"ID":"608dec52-033b-4c24-9fbf-8fefe81621a9","Type":"ContainerStarted","Data":"60d289f9860aea5183462739badf3ac4bc95dad32afa678ddd17fcc98fbf4373"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.043120 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" event={"ID":"608dec52-033b-4c24-9fbf-8fefe81621a9","Type":"ContainerStarted","Data":"bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.044604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b5gpb" event={"ID":"04dd150e-af11-495b-a44b-10cce42da55b","Type":"ContainerStarted","Data":"acac6926e4a3d3ff4ffbe4df03513665fb9767d834e311a9014488a905873713"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.045823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"effcc05a29f40eb75b75ff1c10712983508ca82a309d510f8d5ef218e29d87ff"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.045854 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"43a3851ce8e433b6771be4e20f604dcbd631cb222f1c933e9ca2cfd6d8718185"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.046936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" event={"ID":"63805acf-f9ac-4417-824f-6640f8836b3a","Type":"ContainerStarted","Data":"2978e9cd3ccd49cfb0d09269903ee35442921ddc50767ac10039e61dcae52d09"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.046962 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" event={"ID":"63805acf-f9ac-4417-824f-6640f8836b3a","Type":"ContainerStarted","Data":"61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049"} Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.068435 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.072611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.072869 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.572853966 +0000 UTC m=+144.139157790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.091949 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.092174 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:59 crc kubenswrapper[4984]: W0130 10:13:59.112837 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9e5765_1adb_417b_abbc_82c398a424a2.slice/crio-9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c WatchSource:0}: Error finding container 9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c: Status 404 returned error can't find the container with id 9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.126341 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.168066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.169231 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.169506 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.669430323 +0000 UTC m=+144.235734147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.169838 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.171742 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.67170268 +0000 UTC m=+144.238006504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.186980 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.211498 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.221650 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.225932 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.240897 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.270679 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.273909 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"] Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 
10:13:59.277056 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.774673255 +0000 UTC m=+144.340977079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.371797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.372078 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.373262 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.873226139 +0000 UTC m=+144.439530023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.384462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.413327 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.472661 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.473116 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.973083198 +0000 UTC m=+144.539387022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.473620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.474242 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.974230697 +0000 UTC m=+144.540534521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.527886 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.540796 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.574345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.576111 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.076090544 +0000 UTC m=+144.642394368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: W0130 10:13:59.634602 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdc71eba_e354_4963_967a_7e1c908467b5.slice/crio-0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa WatchSource:0}: Error finding container 0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa: Status 404 returned error can't find the container with id 0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.680743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.681668 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.181652166 +0000 UTC m=+144.747956000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.761796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.783243 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.783653 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.283635237 +0000 UTC m=+144.849939061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.886554 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.886884 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.38687228 +0000 UTC m=+144.953176104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.987885 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.988478 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.488463648 +0000 UTC m=+145.054767472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.066958 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.094203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.094637 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.594622181 +0000 UTC m=+145.160926005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.121660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"d1a034491ed6daf84dc17f5d63044c0801c115dbca562b886eea23ce262bddb3"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.127388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k9xrn" event={"ID":"5f7fb8a3-2517-48bf-9a10-82725a7391cb","Type":"ContainerStarted","Data":"301aeac89d8e9bdfbe134b9ff92735088e4aeef66bc31303610e344c70b968d6"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.131576 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jc8ph" event={"ID":"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761","Type":"ContainerStarted","Data":"0732e61d68b0c4329e59579400aab67845dd855d6c55216938ffbb04cc930430"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.132765 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.132850 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" 
podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.199173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.211887 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.7118687 +0000 UTC m=+145.278172524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.256030 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" event={"ID":"a5c7a47a-7861-4e43-b3f8-a187fc65f041","Type":"ContainerStarted","Data":"008bc110a4150a4f316807d409f4953605063d6dd29936f7f92d10d36fa4241d"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.286284 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"7bbbdb2cb42e46ca9f2a0f8a868a834178266e2fb4ab5f633e20ef5d36a3a307"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.297325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" event={"ID":"1f9e5765-1adb-417b-abbc-82c398a424a2","Type":"ContainerStarted","Data":"9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.299186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"14455f633a9b5fe51d39c12a8d371f46a45d93dce5bde1dde335216387f3a7d2"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.307097 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.307499 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.807486123 +0000 UTC m=+145.373789937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.317350 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" event={"ID":"01fe4b95-41f9-432d-b597-3941f219b7af","Type":"ContainerStarted","Data":"82ec606e8193c56213694d2d41c5ef8bc902d092f97533a4e819524312ff6ccd"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.318159 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.319159 4984 patch_prober.go:28] interesting pod/console-operator-58897d9998-z7s9j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.319238 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" podUID="01fe4b95-41f9-432d-b597-3941f219b7af" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.324488 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 
10:14:00.336627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"6b862628f7cb17aecedce100d4d5accab2716a2a3969709e0db85105dcaa966a"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.336983 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.350501 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerStarted","Data":"1d107edce64a981b016ac18f64e3952e99a1d1ef26bb18f85c1948ec49ead73c"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.354630 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" podStartSLOduration=123.354607143 podStartE2EDuration="2m3.354607143s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.305446964 +0000 UTC m=+144.871750778" watchObservedRunningTime="2026-01-30 10:14:00.354607143 +0000 UTC m=+144.920910957" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.358474 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.359255 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" podStartSLOduration=123.35924385 podStartE2EDuration="2m3.35924385s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 10:14:00.340615998 +0000 UTC m=+144.906919822" watchObservedRunningTime="2026-01-30 10:14:00.35924385 +0000 UTC m=+144.925547674" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.363703 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.369499 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.383853 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerStarted","Data":"22894fd3f7185098bfb82595039c231f2f5583d91c055ff95ffbf8f516afcd2e"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.395001 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" event={"ID":"bc2c9228-6181-419f-acdb-869007ac6f6c","Type":"ContainerStarted","Data":"007301544b29cc61478615627f8d6ffe84e0d4b61ecae9393a10bb6331168a85"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.399547 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" podStartSLOduration=123.399525667 podStartE2EDuration="2m3.399525667s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.384153175 +0000 UTC m=+144.950456999" watchObservedRunningTime="2026-01-30 10:14:00.399525667 +0000 UTC m=+144.965829491" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.408063 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.409382 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.909366071 +0000 UTC m=+145.475669895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.438702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" event={"ID":"3c2dcd5a-96f0-48ff-a004-9764d24b66b1","Type":"ContainerStarted","Data":"658b712cfc2bd705261b67aae7bb0f201f65994e338ff510e87909761461e86a"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.453531 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" event={"ID":"fdc71eba-e354-4963-967a-7e1c908467b5","Type":"ContainerStarted","Data":"0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.487960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" event={"ID":"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0","Type":"ContainerStarted","Data":"b1d81e81f66e6428e996ca7f686ef9e8908aa854c2e4a9d174818f49eb0d428e"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.488725 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.490252 4984 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wc7jt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.490358 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podUID="ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.504792 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"d50bbcffbf98d16fce57cd7c81f40638192b3cecf76451eac0e5109332dde5b2"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.510901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc 
kubenswrapper[4984]: E0130 10:14:00.512689 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.012665656 +0000 UTC m=+145.578969570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.525801 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" event={"ID":"362ed1a8-599d-44c5-bf2d-d9d7d69517e8","Type":"ContainerStarted","Data":"387b950d461625b54785706d9b2f1c1cb310861069c39459081584bcd5b1b466"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.537927 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" podStartSLOduration=123.537904453 podStartE2EDuration="2m3.537904453s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.47623555 +0000 UTC m=+145.042539374" watchObservedRunningTime="2026-01-30 10:14:00.537904453 +0000 UTC m=+145.104208277" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.538401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b5gpb" 
event={"ID":"04dd150e-af11-495b-a44b-10cce42da55b","Type":"ContainerStarted","Data":"7c23a07be811936c7890f48faf43b6c3e725278fa2d110b700a697eb033d36af"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.553811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" event={"ID":"50a9dda1-acf5-471f-a6cd-46e77a1dfa24","Type":"ContainerStarted","Data":"5519abc75e71a338677d01638f405658fca16ec07102cc2b55d862bdf15b61b0"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.554910 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.565081 4984 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6kcss container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.567016 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" podUID="50a9dda1-acf5-471f-a6cd-46e77a1dfa24" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.589408 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" event={"ID":"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10","Type":"ContainerStarted","Data":"a3161b1f3d59aefbc06ae9fe97490a9eece99847ea063ad7ab287ddaa666fbf1"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.591377 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" 
event={"ID":"5d031ce5-81d8-4a93-8ef6-a97a86e06195","Type":"ContainerStarted","Data":"f9c17c69f7653da6810adb58a1c7427835a5870597005f5e5fd247c61d834542"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.618771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.626276 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.126240051 +0000 UTC m=+145.692543875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.629174 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podStartSLOduration=123.62915174 podStartE2EDuration="2m3.62915174s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.617231155 +0000 UTC m=+145.183534979" watchObservedRunningTime="2026-01-30 10:14:00.62915174 +0000 UTC 
m=+145.195455564" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.630369 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tnwfs"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.678497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerStarted","Data":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.700767 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.704710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.715095 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.732669 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.733861 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.233848923 +0000 UTC m=+145.800152847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.734009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerStarted","Data":"ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828"} Jan 30 10:14:00 crc kubenswrapper[4984]: W0130 10:14:00.739331 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b635e15_1e86_4142_8e1d_c26628aa2403.slice/crio-a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65 WatchSource:0}: Error finding container a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65: Status 404 returned error can't find the container with id a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65 Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.753615 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.754556 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.833966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.835946 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.335896976 +0000 UTC m=+145.902200820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.861488 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podStartSLOduration=123.861456723 podStartE2EDuration="2m3.861456723s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.789653706 +0000 UTC m=+145.355957530" watchObservedRunningTime="2026-01-30 10:14:00.861456723 +0000 UTC m=+145.427760567" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.892817 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jc8ph" podStartSLOduration=123.892799767 podStartE2EDuration="2m3.892799767s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.892036511 +0000 UTC m=+145.458340345" watchObservedRunningTime="2026-01-30 10:14:00.892799767 +0000 UTC m=+145.459103591" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.926151 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" podStartSLOduration=123.926131628 podStartE2EDuration="2m3.926131628s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.921229812 +0000 UTC m=+145.487533646" watchObservedRunningTime="2026-01-30 10:14:00.926131628 +0000 UTC m=+145.492435452" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.936946 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.937364 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.437349759 +0000 UTC m=+146.003653583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.959517 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b5gpb" podStartSLOduration=123.95950138 podStartE2EDuration="2m3.95950138s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.957469221 +0000 UTC m=+145.523773045" watchObservedRunningTime="2026-01-30 10:14:00.95950138 +0000 UTC m=+145.525805194" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.038834 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.039301 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.539258727 +0000 UTC m=+146.105562551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.054011 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" podStartSLOduration=125.053992637 podStartE2EDuration="2m5.053992637s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.053256262 +0000 UTC m=+145.619560096" watchObservedRunningTime="2026-01-30 10:14:01.053992637 +0000 UTC m=+145.620296471" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.091983 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podStartSLOduration=124.091966896 podStartE2EDuration="2m4.091966896s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.090385302 +0000 UTC m=+145.656689126" watchObservedRunningTime="2026-01-30 10:14:01.091966896 +0000 UTC m=+145.658270720" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.140986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: 
\"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.141291 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.641279369 +0000 UTC m=+146.207583193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.186224 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" podStartSLOduration=124.186203374 podStartE2EDuration="2m4.186203374s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.139225869 +0000 UTC m=+145.705529693" watchObservedRunningTime="2026-01-30 10:14:01.186203374 +0000 UTC m=+145.752507198" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.186728 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" podStartSLOduration=124.186723431 podStartE2EDuration="2m4.186723431s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
10:14:01.185476209 +0000 UTC m=+145.751780033" watchObservedRunningTime="2026-01-30 10:14:01.186723431 +0000 UTC m=+145.753027255" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.247137 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.247231 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" podStartSLOduration=124.247222064 podStartE2EDuration="2m4.247222064s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.246690926 +0000 UTC m=+145.812994750" watchObservedRunningTime="2026-01-30 10:14:01.247222064 +0000 UTC m=+145.813525888" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.248073 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.748060843 +0000 UTC m=+146.314364667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.320783 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" podStartSLOduration=124.3207618 podStartE2EDuration="2m4.3207618s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.276495628 +0000 UTC m=+145.842799452" watchObservedRunningTime="2026-01-30 10:14:01.3207618 +0000 UTC m=+145.887065624" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.321259 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" podStartSLOduration=124.321252497 podStartE2EDuration="2m4.321252497s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.318951179 +0000 UTC m=+145.885255003" watchObservedRunningTime="2026-01-30 10:14:01.321252497 +0000 UTC m=+145.887556321" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.349000 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.349426 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.849410232 +0000 UTC m=+146.415714056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.366479 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.380677 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:01 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:01 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:01 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.380729 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 
10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.452708 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.453063 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.953049259 +0000 UTC m=+146.519353083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.554933 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.555204 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:02.055193936 +0000 UTC m=+146.621497760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.656284 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.657007 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.1569855 +0000 UTC m=+146.723289324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.758008 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.758381 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.258368221 +0000 UTC m=+146.824672045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.781468 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerStarted","Data":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.782091 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.794338 4984 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-59vj6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.795308 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.795078 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" 
event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"62c7ff7576e10aa88d6094fb824d2beb203320b96e4cc6756e12453560436b88"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.810939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"03f401ff4572dc2177274e4915ef8233351fde6f62f7b9de1ca9989292a1c703"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.817857 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"92fa44b7b6cb02794397be0e23d525b02239f08cc8c1827cde537a402c4e24a7"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.817901 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"8306bfeb06d314e7b017aee7dae03484d367991e1b2e3816187b786c2073b255"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.824285 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"361331219b350506ddf6d687f4962f0883845f4054d4451565087aa1ea6dec90"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.857515 4984 generic.go:334] "Generic (PLEG): container finished" podID="549b3b6c-e68d-4da4-8780-643fdbf7e4c9" containerID="5ab984a1154eb5ad3a9a3604f83049396ded07ecdfb0542d40af96126a13ab2e" exitCode=0 Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.857843 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" 
event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerDied","Data":"5ab984a1154eb5ad3a9a3604f83049396ded07ecdfb0542d40af96126a13ab2e"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.859088 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.861114 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.361095357 +0000 UTC m=+146.927399181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.865455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" event={"ID":"fdc71eba-e354-4963-967a-7e1c908467b5","Type":"ContainerStarted","Data":"ef3960325150cd358686864330f0c6b777279c85b1fc615bd6b4b1b6f7ec2df7"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.882315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" 
event={"ID":"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10","Type":"ContainerStarted","Data":"aabbc58250375bc5f55d788a4db33ac17e6bdf40642f38d394e4d0be27056f06"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.887418 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" podStartSLOduration=124.887397109 podStartE2EDuration="2m4.887397109s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.883458956 +0000 UTC m=+146.449762780" watchObservedRunningTime="2026-01-30 10:14:01.887397109 +0000 UTC m=+146.453700933" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.887660 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podStartSLOduration=124.887646238 podStartE2EDuration="2m4.887646238s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.82611092 +0000 UTC m=+146.392414754" watchObservedRunningTime="2026-01-30 10:14:01.887646238 +0000 UTC m=+146.453950052" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.888906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"27ea6c50669b8778e541bc64a8b74fb97f659dd744319633faae8d0a0a46df22"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.926821 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" 
event={"ID":"1f9e5765-1adb-417b-abbc-82c398a424a2","Type":"ContainerStarted","Data":"1ecbdb8ae21a09346a5b98a17ebe388815e843d7fba69d7eb47baedb83c1fbd1"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.956362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" event={"ID":"362ed1a8-599d-44c5-bf2d-d9d7d69517e8","Type":"ContainerStarted","Data":"a225fd87f86254c5a432f543489cbc555ca160013e66067199b33297621d69db"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.980442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.984801 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.484782784 +0000 UTC m=+147.051086608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.002397 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"b485f6f7c6190f154740f68b4417664590c5b58ea29c2a4671113230856baab8"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.003474 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" podStartSLOduration=125.003455558 podStartE2EDuration="2m5.003455558s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.002741974 +0000 UTC m=+146.569045798" watchObservedRunningTime="2026-01-30 10:14:02.003455558 +0000 UTC m=+146.569759382" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.003932 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" podStartSLOduration=126.003910133 podStartE2EDuration="2m6.003910133s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.929649143 +0000 UTC m=+146.495952957" watchObservedRunningTime="2026-01-30 10:14:02.003910133 +0000 UTC m=+146.570213957" Jan 30 
10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.005672 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerStarted","Data":"b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.056984 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm6bv" event={"ID":"e9a01c47-eab2-4990-a659-a1f15a8176dd","Type":"ContainerStarted","Data":"8d3cfa56fa77b180b141579185dcc64fe220c1a1d586a3d6de43c2f635115011"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.057312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm6bv" event={"ID":"e9a01c47-eab2-4990-a659-a1f15a8176dd","Type":"ContainerStarted","Data":"fdb7a6a725360fa79f97561c6ba28587f8b87d80741083b568bdbb89ce461b6b"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.066913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"fbc4e3c07cfdea4da11f5f99b46fa46449c85ad088ad5487cb4736ba61736857"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.066965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"54e5ed88926849caedd1613baf95128e3c720a8e4ac5fec265c0105b7a4d2461"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.069754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k9xrn" 
event={"ID":"5f7fb8a3-2517-48bf-9a10-82725a7391cb","Type":"ContainerStarted","Data":"db22e08b1375cbe91c0c6691ee5283a52b3cdcdd23ece840b610c8292748b312"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.076488 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"55eaba56218414e484072cae8971ca1d79d72d34071e7b9195611d687822bbb7"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.087115 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.088588 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.588568156 +0000 UTC m=+147.154871980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.118470 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" podStartSLOduration=125.11844821 podStartE2EDuration="2m5.11844821s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.086692713 +0000 UTC m=+146.652996537" watchObservedRunningTime="2026-01-30 10:14:02.11844821 +0000 UTC m=+146.684752024" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.121210 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" event={"ID":"3c2dcd5a-96f0-48ff-a004-9764d24b66b1","Type":"ContainerStarted","Data":"abe2c9aaa9e673f7aff58c200f88224065d1a83cc592682f1c5c1ab56a634d63"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.145500 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"894abc265f760e03c040845c77138ba87077a2802e9bced39e83f05aed031205"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.186463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" podStartSLOduration=125.186441078 
podStartE2EDuration="2m5.186441078s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.174877345 +0000 UTC m=+146.741181169" watchObservedRunningTime="2026-01-30 10:14:02.186441078 +0000 UTC m=+146.752744902" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.191333 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.193948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" event={"ID":"7b635e15-1e86-4142-8e1d-c26628aa2403","Type":"ContainerStarted","Data":"a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65"} Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.195111 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.695099702 +0000 UTC m=+147.261403526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.221083 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" podStartSLOduration=125.221069053 podStartE2EDuration="2m5.221069053s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.220037298 +0000 UTC m=+146.786341122" watchObservedRunningTime="2026-01-30 10:14:02.221069053 +0000 UTC m=+146.787372877" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.238795 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerStarted","Data":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.262760 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.263961 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.265247 4984 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lf7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.265293 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.292778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.293875 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.793859873 +0000 UTC m=+147.360163697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.297770 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"659c778caa3e998286e950e5d885087dc705ae499832fe2cf9924d02ed342f9f"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.325064 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" podStartSLOduration=125.325042921 podStartE2EDuration="2m5.325042921s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.262503219 +0000 UTC m=+146.828807043" watchObservedRunningTime="2026-01-30 10:14:02.325042921 +0000 UTC m=+146.891346745" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.329250 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tm6bv" podStartSLOduration=7.329236414 podStartE2EDuration="7.329236414s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.326987937 +0000 UTC m=+146.893291761" watchObservedRunningTime="2026-01-30 10:14:02.329236414 +0000 UTC m=+146.895540238" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.329769 
4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k9xrn" podStartSLOduration=7.329763211 podStartE2EDuration="7.329763211s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.298810981 +0000 UTC m=+146.865114805" watchObservedRunningTime="2026-01-30 10:14:02.329763211 +0000 UTC m=+146.896067035" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.335514 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" event={"ID":"477b0c18-df7c-46c8-bae3-d0dda1af580c","Type":"ContainerStarted","Data":"5b097930b0a77fb7cba6077c842b0603a915895cc9d97f95fb07f5eaabb800cf"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.335554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" event={"ID":"477b0c18-df7c-46c8-bae3-d0dda1af580c","Type":"ContainerStarted","Data":"f5fa7eaf7427770821c1d1a4aa6a34a00402375e2d2121f48073834f41a6f193"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.360992 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" podStartSLOduration=125.360976081 podStartE2EDuration="2m5.360976081s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.360000458 +0000 UTC m=+146.926304282" watchObservedRunningTime="2026-01-30 10:14:02.360976081 +0000 UTC m=+146.927279905" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.384668 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"5f78f999a45e026532cff91c591535f5cde36cc8ad11d2e857b7e9de6f79e4c9"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.393615 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:02 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:02 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:02 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.393666 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.394403 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.397643 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.897630765 +0000 UTC m=+147.463934589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.435057 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"99230cb91a992a249fb9a2383a85874916b90baf09ff9dfc5c1ff9f683675c8e"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.435102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"0d52af09a1f62b57d67e6e9a1549447344849457cc426877b777ed6ef0eb72ca"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.476741 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerStarted","Data":"35596638ffd8fdd19d27c9adf74965850144dc178e41450f9ad3ace770880810"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.483344 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.483488 4984 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.495531 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.496853 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.99681163 +0000 UTC m=+147.563115504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.530779 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.538694 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" podStartSLOduration=125.538672101 podStartE2EDuration="2m5.538672101s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.483093875 +0000 UTC m=+147.049397699" watchObservedRunningTime="2026-01-30 10:14:02.538672101 +0000 UTC m=+147.104975925" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.552171 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.606949 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-v2prt" podStartSLOduration=125.606908367 podStartE2EDuration="2m5.606908367s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.60612967 +0000 UTC m=+147.172433494" watchObservedRunningTime="2026-01-30 
10:14:02.606908367 +0000 UTC m=+147.173212191" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.607796 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.608983 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podStartSLOduration=125.608968667 podStartE2EDuration="2m5.608968667s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.544319203 +0000 UTC m=+147.110623037" watchObservedRunningTime="2026-01-30 10:14:02.608968667 +0000 UTC m=+147.175272491" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.617826 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.117779336 +0000 UTC m=+147.684083160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.705375 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" podStartSLOduration=125.705359148 podStartE2EDuration="2m5.705359148s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.704973495 +0000 UTC m=+147.271277319" watchObservedRunningTime="2026-01-30 10:14:02.705359148 +0000 UTC m=+147.271662972" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.707409 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" podStartSLOduration=125.707404217 podStartE2EDuration="2m5.707404217s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.647505254 +0000 UTC m=+147.213809078" watchObservedRunningTime="2026-01-30 10:14:02.707404217 +0000 UTC m=+147.273708041" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.709763 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.709985 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.209952314 +0000 UTC m=+147.776256148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.710060 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.710483 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.210471541 +0000 UTC m=+147.776775445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.812857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.813625 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.313608051 +0000 UTC m=+147.879911875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.818330 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.818610 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" podStartSLOduration=125.81859369 podStartE2EDuration="2m5.81859369s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.815065401 +0000 UTC m=+147.381369225" watchObservedRunningTime="2026-01-30 10:14:02.81859369 +0000 UTC m=+147.384897514" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.819388 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.823776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.856078 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915690 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915802 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: 
\"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.916498 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.416486023 +0000 UTC m=+147.982789847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.003404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.003469 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.020675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.020937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021070 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.021395 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.521370242 +0000 UTC m=+148.087674066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.048858 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.049971 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.069150 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.086621 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.121936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.122339 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.622323548 +0000 UTC m=+148.188627372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.137424 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.155382 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.208459 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.209370 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.222940 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223198 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223262 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.223396 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-30 10:14:03.723380307 +0000 UTC m=+148.289684131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.312230 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " 
pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325605 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325647 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.326005 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.326358 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:03.826347012 +0000 UTC m=+148.392650826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.326930 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.327216 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.394972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.396519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.405049 4984 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.412103 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:03 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:03 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:03 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.412154 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.418984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428314 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428390 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428428 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.428481 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.928456737 +0000 UTC m=+148.494760571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.429688 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.429803 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.458934 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.485053 4984 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wc7jt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.485100 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podUID="ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.506550 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" event={"ID":"7b635e15-1e86-4142-8e1d-c26628aa2403","Type":"ContainerStarted","Data":"139c935d022e110cca3f17545792f356a511e9802b101812f881010ad59bef14"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.512161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"fcb97bf33d069b8cf02220c164c204c2f9a6b32968c22071a9fb399b64f2f155"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.518486 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"1f92843512b822c91ad12848a5264deee50c474ed31ae69b734c1bed3b655aa3"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.518516 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"fadb0631e77c9a49aef94f56ac428cc8611d35c5973e9214484a2c03d99c334d"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.519078 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 
10:14:03.523369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"85b351802096b6ac17dabed2e20ea9fa0e83751307a3586fd3de7c6fb45b836e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.523396 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"9b9bce640c20806c5b47bce08964584215fed903de8ecd707db12c742aab15b0"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.523868 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.530920 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.530959 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.531005 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: 
\"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.531022 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.531300 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.031288557 +0000 UTC m=+148.597592381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547208 4984 generic.go:334] "Generic (PLEG): container finished" podID="19fa971c-228f-4457-81be-b2d9220ce27f" containerID="8c4a288bbf3ca8d659e3baa40a24d3c4957909b7c5275ae792a692c37d549c9b" exitCode=0 Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" 
event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerDied","Data":"8c4a288bbf3ca8d659e3baa40a24d3c4957909b7c5275ae792a692c37d549c9b"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547336 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerStarted","Data":"9408d0452c9224851a6729b0f77776884723b08217d286d0acfe04e0eec03974"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547856 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547952 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.568605 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tnwfs" podStartSLOduration=8.568584392 podStartE2EDuration="8.568584392s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.542488107 +0000 UTC m=+148.108791931" watchObservedRunningTime="2026-01-30 10:14:03.568584392 +0000 UTC m=+148.134888216" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.569583 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" podStartSLOduration=126.569575836 podStartE2EDuration="2m6.569575836s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.564207914 +0000 UTC m=+148.130511738" 
watchObservedRunningTime="2026-01-30 10:14:03.569575836 +0000 UTC m=+148.135879660" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.578068 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"60c291e23734e20d25712ba6ef2a8740692c5179714a2fde29d37cd9fe11106e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.594671 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" podStartSLOduration=126.594633696 podStartE2EDuration="2m6.594633696s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.591687216 +0000 UTC m=+148.157991040" watchObservedRunningTime="2026-01-30 10:14:03.594633696 +0000 UTC m=+148.160937520" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.605578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"ebe3ef114cbb4782c56c4ed7042911f23e7df193e419a7215250a01f79b2e14b"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.609967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerStarted","Data":"4a571e6f19403b2302694f635424d2d4d26ff0fd7f60fabcf4b66a3ab4356616"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.617856 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" podStartSLOduration=126.617839474 podStartE2EDuration="2m6.617839474s" podCreationTimestamp="2026-01-30 10:11:57 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.614800811 +0000 UTC m=+148.181104635" watchObservedRunningTime="2026-01-30 10:14:03.617839474 +0000 UTC m=+148.184143298" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.636857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637010 4984 generic.go:334] "Generic (PLEG): container finished" podID="3cb637fe-7a94-4790-abf9-3beb38ecb8da" containerID="284055d70f4b9c5e29f2b0a0e012e6be97affdaba213aca86b1c2703e3eb6309" exitCode=0 Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637119 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerDied","Data":"284055d70f4b9c5e29f2b0a0e012e6be97affdaba213aca86b1c2703e3eb6309"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637142 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " 
pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637153 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"ec2206fbe3e9720b633cbd58dd1bf1a7409869b6990e0f322d4c2d92b687acb6"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637166 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"74f668e3f0f9dd5f82298b56b2689255a738ad0f5dbd9f39555596f07229bc56"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.648165 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.148141612 +0000 UTC m=+148.714445436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.648575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.649118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.653509 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" podStartSLOduration=126.653491274 podStartE2EDuration="2m6.653491274s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.65013104 +0000 UTC m=+148.216434874" watchObservedRunningTime="2026-01-30 10:14:03.653491274 +0000 UTC m=+148.219795098" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.672959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" 
event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"4f4c54755db5e790fb5039c31a88da62c24a44ae9e57e0995ab21128b0e61464"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.679943 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.696160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.697269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.709507 4984 csr.go:261] certificate signing request csr-br585 is approved, waiting to be issued Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.720043 4984 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lf7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.720090 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.723029 4984 csr.go:257] certificate signing request csr-br585 is issued Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 
10:14:03.735344 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.735393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"ccbccd64af542f90ca0aef149e38ed1030deb5adc4ad6b563b7730a8d68baa45"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.735628 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.736071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" podStartSLOduration=126.736055856 podStartE2EDuration="2m6.736055856s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.711017586 +0000 UTC m=+148.277321410" watchObservedRunningTime="2026-01-30 10:14:03.736055856 +0000 UTC m=+148.302359670" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.738798 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.742734 4984 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.242713062 +0000 UTC m=+148.809016886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.755439 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.788382 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" podStartSLOduration=126.788367711 podStartE2EDuration="2m6.788367711s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.739848144 +0000 UTC m=+148.306151968" watchObservedRunningTime="2026-01-30 10:14:03.788367711 +0000 UTC m=+148.354671535" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.839396 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.840569 4984 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.340545941 +0000 UTC m=+148.906849755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.840637 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.841471 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.341463352 +0000 UTC m=+148.907767166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.854557 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.943994 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.944520 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.444496648 +0000 UTC m=+149.010800472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.961876 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.962186 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.462174848 +0000 UTC m=+149.028478672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.064159 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.064564 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.564547652 +0000 UTC m=+149.130851476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.165851 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.166314 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.666294115 +0000 UTC m=+149.232598009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.185890 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:04 crc kubenswrapper[4984]: W0130 10:14:04.221017 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874a87b2_c81a_4ce9_85c6_c41d18835f35.slice/crio-d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2 WatchSource:0}: Error finding container d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2: Status 404 returned error can't find the container with id d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2 Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.275474 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.276024 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.776008059 +0000 UTC m=+149.342311883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.369789 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:04 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:04 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:04 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.369849 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.376774 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.377095 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:04.877079979 +0000 UTC m=+149.443383803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.401406 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.477588 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.478017 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.977556418 +0000 UTC m=+149.543860242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.478050 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.478474 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.978451049 +0000 UTC m=+149.544754873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.541416 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.541478 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.543367 4984 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fzff9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.543422 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" podUID="3cb637fe-7a94-4790-abf9-3beb38ecb8da" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.579707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 
10:14:04.579869 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.079843019 +0000 UTC m=+149.646146843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.580054 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.580445 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.080433039 +0000 UTC m=+149.646736863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.638498 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:14:04 crc kubenswrapper[4984]: W0130 10:14:04.642588 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33689f3c_1867_4707_a8c2_ed56c467cff6.slice/crio-b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b WatchSource:0}: Error finding container b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b: Status 404 returned error can't find the container with id b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.681014 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.681191 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.181165878 +0000 UTC m=+149.747469702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.681348 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.681686 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.181672385 +0000 UTC m=+149.747976209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.722504 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerStarted","Data":"b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b"} Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723268 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerStarted","Data":"d624716dec815a31dc6fb1b18652f2e1a4591d64f410b4c644c4fb229fcd424e"} Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 10:09:03 +0000 UTC, rotation deadline is 2026-12-18 07:44:20.218263615 +0000 UTC Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723840 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7725h30m15.494426919s for next certificate rotation Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723972 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerStarted","Data":"d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2"} Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725127 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" exitCode=0 Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725229 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba"} Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerStarted","Data":"ff00561d64a6b687fd04a281bc0b10957180facce6b051e1ab6f63d8c0e3e399"} Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.726538 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.783345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.783744 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.283730039 +0000 UTC m=+149.850033863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.820204 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.821718 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.824124 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.830785 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.884947 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.892399 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.392384496 +0000 UTC m=+149.958688320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.986813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.986958 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.987019 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.486988426 +0000 UTC m=+150.053292300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987262 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987606 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.987698 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:05.48768643 +0000 UTC m=+150.053990334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.088706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.088905 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.588876844 +0000 UTC m=+150.155180678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.088970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089009 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089068 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089233 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.089402 4984 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.589388381 +0000 UTC m=+150.155692205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089700 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.094922 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:14:05 crc 
kubenswrapper[4984]: I0130 10:14:05.095106 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.095319 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.106817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.108835 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.113703 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.130138 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.144695 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190455 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.190637 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.690607856 +0000 UTC m=+150.256911680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190713 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190785 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.191051 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.691040331 +0000 UTC m=+150.257344255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.197825 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.204329 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.205220 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.218319 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.259531 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.292299 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.292461 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.792431592 +0000 UTC m=+150.358735416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.292853 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.293230 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.793206628 +0000 UTC m=+150.359510452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.317151 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.368768 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:05 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:05 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:05 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.368822 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396558 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396815 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396904 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.397203 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.897189057 +0000 UTC m=+150.463492881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499193 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499226 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499275 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: 
\"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499672 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.500365 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.000354588 +0000 UTC m=+150.566658402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.518629 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.577533 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.592855 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.600488 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.600850 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:06.100835368 +0000 UTC m=+150.667139192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.662725 4984 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.701926 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.702530 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.202513879 +0000 UTC m=+150.768817703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.705473 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdmkd"] Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.735250 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bda5ab0954820602ce380a42cd1e8b0b38108cdbead38acf61bf495e86970ba1"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.783866 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"f8f7cfe79fef7ae968f236d572066f31a2aa493df81e9080bec8b804f970e9d4"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.784038 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"97fd18183bcfdbb0e61d75bea2054aa48b8548cbfb4a2e4b037d612035360967"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.788747 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerStarted","Data":"4aa1ead20f9be6ec24d5528456caa32578bf134deec9e2dc8d7d858e101255c0"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 
10:14:05.803643 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.803865 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.303832887 +0000 UTC m=+150.870136711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.803954 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.804607 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:06.304596573 +0000 UTC m=+150.870900397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.811230 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.811308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.814127 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.814161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.818702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b3315c76b287e825e2d6771f21ffd188461e9cad424dfec0c9a1df41afb52803"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.818943 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.833761 4984 generic.go:334] "Generic (PLEG): container finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.835100 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.860330 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.907779 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.908552 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.40852598 +0000 UTC m=+150.974829804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.924692 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.933547 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.433508998 +0000 UTC m=+150.999812822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.941923 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.025930 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.027712 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.527695364 +0000 UTC m=+151.093999188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.127617 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.127887 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.627872583 +0000 UTC m=+151.194176407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.201900 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.204587 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.205100 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.206431 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.229737 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.230169 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.730131664 +0000 UTC m=+151.296435488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.230561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.230977 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.730969282 +0000 UTC m=+151.297273106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.259432 4984 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T10:14:05.662746189Z","Handler":null,"Name":""} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.261848 4984 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.261890 4984 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331368 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331710 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " 
pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331765 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331815 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.335489 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.370958 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:06 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:06 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:06 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.371019 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.433953 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434043 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434124 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") 
" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434166 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434602 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.437504 4984 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.437556 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.456153 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.462725 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.583739 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.592390 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.593710 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.601122 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.707223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738429 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738477 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: 
\"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841838 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841875 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.842527 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.842547 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.851452 4984 generic.go:334] "Generic (PLEG): container finished" podID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerID="b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.851539 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerDied","Data":"b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.862501 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.884975 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"05fec0f570dac27c27647b10ad3ced36b18b0435ab601e2a61ac806860f3e111"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.893899 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.893963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"74f7d28a0e08be422a754d690835c9c115cca8bbbefc3a7dbf062e3a55be3cc7"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"c1c0be0897c74e1e5bfac7e6f04a7470b6e71b54d2fad486fff495474bb80321"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"b1afca7b810b63c91ec3da8b1b91ffda7f897669d0a1939e818f008318abe3cb"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.900460 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e217a7c14d9c317dc8379cac46710f87e935131cc71b01134b24c83dfa7adf0f"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.902300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"159dd998b9dd4a12a42f6798a58a9e84c28bee47d9ddf7c7ac1aae5f3aaad36e"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.905418 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podStartSLOduration=11.90540608 podStartE2EDuration="11.90540608s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:06.904599123 +0000 UTC m=+151.470902947" watchObservedRunningTime="2026-01-30 10:14:06.90540608 +0000 UTC m=+151.471709904" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907555 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" 
containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907662 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerStarted","Data":"a250f9ff70a0c171ec7466406231de39859f7a4b3ffd9f714f62126a8f50f17b"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.912877 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"746156d6ab9e2462688e32420a534fbded046e889a3daef751744861b3e18a38"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.912912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7038f5db64c7b4198b177efa9586a34734ab4d5fb4cd45f1556d88899fdaeb91"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.922632 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.945323 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sdmkd" podStartSLOduration=130.945302654 podStartE2EDuration="2m10.945302654s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:06.93870789 +0000 UTC m=+151.505011714" watchObservedRunningTime="2026-01-30 10:14:06.945302654 +0000 UTC m=+151.511606478" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.054196 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:07 crc kubenswrapper[4984]: W0130 10:14:07.101904 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ba287c_b444_471f_8be9_e1c553ee251e.slice/crio-1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01 WatchSource:0}: Error finding container 1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01: Status 404 returned error can't find the container with id 1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.208009 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.213003 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.215613 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.218654 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.218813 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.270104 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.355224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.355630 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.368535 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:07 crc 
kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:07 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:07 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.368588 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.457114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.457203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.459181 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.475038 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.500991 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:07 crc kubenswrapper[4984]: W0130 10:14:07.511778 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccdbdcd6_0816_4dc6_bdb2_e3088376d3fe.slice/crio-885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455 WatchSource:0}: Error finding container 885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455: Status 404 returned error can't find the container with id 885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.541885 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944561 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6" exitCode=0 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944744 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerStarted","Data":"885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979044 4984 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerStarted","Data":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979088 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerStarted","Data":"79705b85e33c0776d034e28c0f0671763dc639d3eee8637beef1fb06cd051685"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979108 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.004518 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podStartSLOduration=131.004502158 podStartE2EDuration="2m11.004502158s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:08.002074536 +0000 UTC m=+152.568378360" watchObservedRunningTime="2026-01-30 10:14:08.004502158 +0000 UTC m=+152.570805982" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.017490 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" exitCode=0 Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.018552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193"} Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 
10:14:08.018580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerStarted","Data":"1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01"} Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.049926 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.049983 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.050226 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.050269 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.073690 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.123672 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.124501 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.124523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.130421 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.342492 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.342837 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.348071 4984 patch_prober.go:28] interesting pod/console-f9d7485db-v2prt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.348128 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v2prt" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.365692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 
10:14:08.376077 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:08 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:08 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:08 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.376706 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.416915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.456208 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579548 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.581169 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.588012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.588402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69" (OuterVolumeSpecName: "kube-api-access-dnv69") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "kube-api-access-dnv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681883 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681932 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681942 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.041736 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerStarted","Data":"168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236"} Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.041778 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerStarted","Data":"f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142"} Jan 30 
10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerDied","Data":"ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828"} Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054317 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054392 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.065174 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.065152402 podStartE2EDuration="2.065152402s" podCreationTimestamp="2026-01-30 10:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:09.056967295 +0000 UTC m=+153.623271119" watchObservedRunningTime="2026-01-30 10:14:09.065152402 +0000 UTC m=+153.631456226" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.077356 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.368033 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:09 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:09 crc kubenswrapper[4984]: 
[+]process-running ok Jan 30 10:14:09 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.368526 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.547524 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.564009 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.071122 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a839741-51fc-4340-8210-3c29bae228c0" containerID="168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236" exitCode=0 Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.071469 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerDied","Data":"168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236"} Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.366048 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:10 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:10 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:10 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.366121 4984 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.738901 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:10 crc kubenswrapper[4984]: E0130 10:14:10.739163 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739177 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739308 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739693 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.740717 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.755039 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.755314 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.836224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.836370 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937526 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.954604 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.083579 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.368053 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:11 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:11 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:11 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.368437 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.609563 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:12 crc kubenswrapper[4984]: I0130 10:14:12.367837 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:12 crc kubenswrapper[4984]: I0130 10:14:12.375718 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127097 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-t8vjw_a2849d59-5121-45c3-bf3c-41c83a87827c/cluster-samples-operator/0.log" Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127150 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2849d59-5121-45c3-bf3c-41c83a87827c" containerID="a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e" exitCode=2 Jan 
30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerDied","Data":"a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e"} Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.128780 4984 scope.go:117] "RemoveContainer" containerID="a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e" Jan 30 10:14:14 crc kubenswrapper[4984]: I0130 10:14:14.099571 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.052705 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.348863 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.352117 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:19 crc kubenswrapper[4984]: W0130 10:14:19.200389 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcb28ac48_0559_44f0_b620_ad0eae3e3efb.slice/crio-dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81 WatchSource:0}: Error finding container dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81: Status 404 returned error can't find the container with id dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81 Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.283156 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357449 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"3a839741-51fc-4340-8210-3c29bae228c0\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"3a839741-51fc-4340-8210-3c29bae228c0\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357709 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3a839741-51fc-4340-8210-3c29bae228c0" (UID: "3a839741-51fc-4340-8210-3c29bae228c0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357910 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.364828 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a839741-51fc-4340-8210-3c29bae228c0" (UID: "3a839741-51fc-4340-8210-3c29bae228c0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.459029 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.181987 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-t8vjw_a2849d59-5121-45c3-bf3c-41c83a87827c/cluster-samples-operator/0.log" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.182313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"af122d465277324b26aa370cf9c30cad1ba1d9748b41f88e445b70b061317ab8"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.184173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerStarted","Data":"dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185744 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerDied","Data":"f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185771 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185839 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:26 crc kubenswrapper[4984]: I0130 10:14:26.715593 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:33 crc kubenswrapper[4984]: I0130 10:14:33.000545 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:14:33 crc kubenswrapper[4984]: I0130 10:14:33.000862 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:14:35 crc kubenswrapper[4984]: I0130 10:14:35.205876 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.813908 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.814483 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8cnkg_openshift-marketplace(4aab6e83-8a77-45ad-aa28-fe2c519133fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.815838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" Jan 30 10:14:38 crc 
kubenswrapper[4984]: I0130 10:14:38.796407 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.409609 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.484202 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.484600 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b9h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dk77x_openshift-marketplace(874a87b2-c81a-4ce9-85c6-c41d18835f35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.486109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" Jan 30 10:14:41 crc 
kubenswrapper[4984]: E0130 10:14:41.501350 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.501504 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hs7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-9vv7r_openshift-marketplace(44e02fc4-8da4-4122-bd3a-9b8f9734ec59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.502715 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" Jan 30 10:14:42 crc kubenswrapper[4984]: E0130 10:14:42.995002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" Jan 30 10:14:42 crc kubenswrapper[4984]: E0130 10:14:42.995011 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.069491 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.069633 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgrgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w4cgz_openshift-marketplace(b628557d-490d-4803-8ae3-fde88678c6a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.071681 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w4cgz" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.076642 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.076978 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fd8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-njw8t_openshift-marketplace(33689f3c-1867-4707-a8c2-ed56c467cff6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.078792 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.850167 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w4cgz" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.850286 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.975781 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.975959 4984 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8p7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vzmvg_openshift-marketplace(ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.977459 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.980073 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.980189 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxf87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dc27n_openshift-marketplace(94ba287c-b444-471f-8be9-e1c553ee251e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.982164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.735988 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5" exitCode=0 Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.736116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"} Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.741968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerStarted","Data":"00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e"} Jan 30 10:14:46 crc kubenswrapper[4984]: E0130 10:14:46.744492 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" Jan 30 10:14:46 crc kubenswrapper[4984]: E0130 10:14:46.744491 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.768560 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=36.768540866 podStartE2EDuration="36.768540866s" podCreationTimestamp="2026-01-30 10:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:46.766028341 +0000 UTC m=+191.332332165" watchObservedRunningTime="2026-01-30 10:14:46.768540866 +0000 UTC m=+191.334844690" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.084882 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 10:14:47 crc kubenswrapper[4984]: E0130 10:14:47.085732 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.085759 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.085991 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.086671 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.094355 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.127216 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.128429 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230341 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230433 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.259919 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.423155 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.753823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerStarted","Data":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"} Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.762719 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerDied","Data":"00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e"} Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.762541 4984 generic.go:334] "Generic (PLEG): container finished" podID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerID="00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e" exitCode=0 Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.784598 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9zx8" podStartSLOduration=2.508570189 podStartE2EDuration="42.784581337s" podCreationTimestamp="2026-01-30 10:14:05 
+0000 UTC" firstStartedPulling="2026-01-30 10:14:06.909718556 +0000 UTC m=+151.476022370" lastFinishedPulling="2026-01-30 10:14:47.185729694 +0000 UTC m=+191.752033518" observedRunningTime="2026-01-30 10:14:47.780589201 +0000 UTC m=+192.346893035" watchObservedRunningTime="2026-01-30 10:14:47.784581337 +0000 UTC m=+192.350885161" Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.837113 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 10:14:48 crc kubenswrapper[4984]: I0130 10:14:48.770526 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerStarted","Data":"640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b"} Jan 30 10:14:48 crc kubenswrapper[4984]: I0130 10:14:48.770584 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerStarted","Data":"3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60"} Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.125191 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.142558 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.142543541 podStartE2EDuration="2.142543541s" podCreationTimestamp="2026-01-30 10:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:48.793681102 +0000 UTC m=+193.359984936" watchObservedRunningTime="2026-01-30 10:14:49.142543541 +0000 UTC m=+193.708847365" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.254677 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.255028 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.254819 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb28ac48-0559-44f0-b620-ad0eae3e3efb" (UID: "cb28ac48-0559-44f0-b620-ad0eae3e3efb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.255404 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.260707 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb28ac48-0559-44f0-b620-ad0eae3e3efb" (UID: "cb28ac48-0559-44f0-b620-ad0eae3e3efb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.356413 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.775601 4984 generic.go:334] "Generic (PLEG): container finished" podID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerID="640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b" exitCode=0 Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.775639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerDied","Data":"640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b"} Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerDied","Data":"dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81"} Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778110 4984 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81" Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778165 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.034625 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.177575 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"086e0d49-45d0-4b49-9f2c-19a1863521d0\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.178729 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"086e0d49-45d0-4b49-9f2c-19a1863521d0\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.179097 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "086e0d49-45d0-4b49-9f2c-19a1863521d0" (UID: "086e0d49-45d0-4b49-9f2c-19a1863521d0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.186609 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "086e0d49-45d0-4b49-9f2c-19a1863521d0" (UID: "086e0d49-45d0-4b49-9f2c-19a1863521d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.280119 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.280157 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789036 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerDied","Data":"3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60"} Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789082 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789082 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870604 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 10:14:51 crc kubenswrapper[4984]: E0130 10:14:51.870892 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870924 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: E0130 10:14:51.870938 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870944 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871101 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871120 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871616 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.873557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.873795 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.880209 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888103 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.988849 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.988977 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989002 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989051 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.007952 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.206225 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.657772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 10:14:52 crc kubenswrapper[4984]: W0130 10:14:52.658629 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6f61aac1_18eb_4615_958d_b52a11645afb.slice/crio-2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853 WatchSource:0}: Error finding container 2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853: Status 404 returned error can't find the container with id 2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853 Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.796548 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerStarted","Data":"2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853"} Jan 30 10:14:53 crc kubenswrapper[4984]: I0130 10:14:53.803512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerStarted","Data":"1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079"} Jan 30 10:14:53 crc kubenswrapper[4984]: I0130 10:14:53.819607 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.8195911970000003 podStartE2EDuration="2.819591197s" podCreationTimestamp="2026-01-30 10:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
10:14:53.816587565 +0000 UTC m=+198.382891399" watchObservedRunningTime="2026-01-30 10:14:53.819591197 +0000 UTC m=+198.385895011" Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.577952 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.579088 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.741025 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.851005 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.967767 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:56 crc kubenswrapper[4984]: I0130 10:14:56.474911 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:14:57 crc kubenswrapper[4984]: I0130 10:14:57.827558 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9zx8" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server" containerID="cri-o://f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" gracePeriod=2 Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.449663 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596667 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596788 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.598743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities" (OuterVolumeSpecName: "utilities") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.602005 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8" (OuterVolumeSpecName: "kube-api-access-vh4b8") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "kube-api-access-vh4b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.629007 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698415 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698452 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698462 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.832560 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" exitCode=0 Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.832640 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8"} Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.835147 4984 generic.go:334] "Generic (PLEG): container 
finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" exitCode=0 Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.835208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051"} Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.836951 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" exitCode=0 Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.836992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"} Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837054 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"a250f9ff70a0c171ec7466406231de39859f7a4b3ffd9f714f62126a8f50f17b"} Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837014 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837079 4984 scope.go:117] "RemoveContainer" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.838775 4984 generic.go:334] "Generic (PLEG): container finished" podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" exitCode=0 Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.838823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74"} Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.919326 4984 scope.go:117] "RemoveContainer" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.920813 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.923435 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.946762 4984 scope.go:117] "RemoveContainer" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961007 4984 scope.go:117] "RemoveContainer" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.961429 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": container with ID 
starting with f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b not found: ID does not exist" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961473 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"} err="failed to get container status \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": rpc error: code = NotFound desc = could not find container \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": container with ID starting with f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b not found: ID does not exist" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961605 4984 scope.go:117] "RemoveContainer" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5" Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.961893 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": container with ID starting with fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5 not found: ID does not exist" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5" Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961918 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"} err="failed to get container status \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": rpc error: code = NotFound desc = could not find container \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": container with ID starting with fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5 not found: 
ID does not exist"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961933 4984 scope.go:117] "RemoveContainer" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"
Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.962614 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": container with ID starting with f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943 not found: ID does not exist" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.962646 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"} err="failed to get container status \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": rpc error: code = NotFound desc = could not find container \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": container with ID starting with f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943 not found: ID does not exist"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.846493 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerStarted","Data":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.847883 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" exitCode=0
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.847990 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.849682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerStarted","Data":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.853200 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab" exitCode=0
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.853325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.857768 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerStarted","Data":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.868638 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vv7r" podStartSLOduration=2.347205198 podStartE2EDuration="55.868615083s" podCreationTimestamp="2026-01-30 10:14:04 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.81071372 +0000 UTC m=+150.377017544" lastFinishedPulling="2026-01-30 10:14:59.332123605 +0000 UTC m=+203.898427429" observedRunningTime="2026-01-30 10:14:59.867169184 +0000 UTC m=+204.433473008" watchObservedRunningTime="2026-01-30 10:14:59.868615083 +0000 UTC m=+204.434918917"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.903391 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dk77x" podStartSLOduration=3.366592599 podStartE2EDuration="56.903374313s" podCreationTimestamp="2026-01-30 10:14:03 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.84400522 +0000 UTC m=+150.410309044" lastFinishedPulling="2026-01-30 10:14:59.380786934 +0000 UTC m=+203.947090758" observedRunningTime="2026-01-30 10:14:59.885745081 +0000 UTC m=+204.452048905" watchObservedRunningTime="2026-01-30 10:14:59.903374313 +0000 UTC m=+204.469678137"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.940642 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8cnkg" podStartSLOduration=3.3656946899999998 podStartE2EDuration="57.940628151s" podCreationTimestamp="2026-01-30 10:14:02 +0000 UTC" firstStartedPulling="2026-01-30 10:14:04.726273559 +0000 UTC m=+149.292577383" lastFinishedPulling="2026-01-30 10:14:59.30120702 +0000 UTC m=+203.867510844" observedRunningTime="2026-01-30 10:14:59.939288634 +0000 UTC m=+204.505592458" watchObservedRunningTime="2026-01-30 10:14:59.940628151 +0000 UTC m=+204.506931975"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.097026 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" path="/var/lib/kubelet/pods/46e81fe4-3beb-448b-955e-c6db37c85e77/volumes"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137046 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"]
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137424 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-utilities"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137446 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-utilities"
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137464 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-content"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137473 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-content"
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137490 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137498 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137710 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.138205 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.140054 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.140054 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.147548 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"]
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317145 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317194 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317238 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418261 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418287 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.419234 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.425953 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.438390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.452664 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.689923 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"]
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.865081 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerStarted","Data":"626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b"}
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.865130 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerStarted","Data":"b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344"}
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.867681 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerStarted","Data":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"}
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.869986 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerStarted","Data":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"}
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.883550 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" podStartSLOduration=0.883531894 podStartE2EDuration="883.531894ms" podCreationTimestamp="2026-01-30 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:15:00.882892876 +0000 UTC m=+205.449196710" watchObservedRunningTime="2026-01-30 10:15:00.883531894 +0000 UTC m=+205.449835718"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.902140 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vzmvg" podStartSLOduration=2.5689887909999998 podStartE2EDuration="54.902119142s" podCreationTimestamp="2026-01-30 10:14:06 +0000 UTC" firstStartedPulling="2026-01-30 10:14:07.952951739 +0000 UTC m=+152.519255563" lastFinishedPulling="2026-01-30 10:15:00.28608208 +0000 UTC m=+204.852385914" observedRunningTime="2026-01-30 10:15:00.899095769 +0000 UTC m=+205.465399593" watchObservedRunningTime="2026-01-30 10:15:00.902119142 +0000 UTC m=+205.468422976"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.918996 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4cgz" podStartSLOduration=4.533354052 podStartE2EDuration="58.918979572s" podCreationTimestamp="2026-01-30 10:14:02 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.834549379 +0000 UTC m=+150.400853203" lastFinishedPulling="2026-01-30 10:15:00.220174899 +0000 UTC m=+204.786478723" observedRunningTime="2026-01-30 10:15:00.916104734 +0000 UTC m=+205.482408568" watchObservedRunningTime="2026-01-30 10:15:00.918979572 +0000 UTC m=+205.485283406"
Jan 30 10:15:01 crc kubenswrapper[4984]: I0130 10:15:01.883824 4984 generic.go:334] "Generic (PLEG): container finished" podID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerID="626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b" exitCode=0
Jan 30 10:15:01 crc kubenswrapper[4984]: I0130 10:15:01.883927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerDied","Data":"626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b"}
Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.892204 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" exitCode=0
Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.892232 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9"}
Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.895834 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" exitCode=0
Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.895919 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5"}
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000784 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000851 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000903 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.001484 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.001546 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" gracePeriod=600
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.156848 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8cnkg"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.156905 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8cnkg"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.179865 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.195977 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8cnkg"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.354553 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") "
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.355169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") "
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.355355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") "
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.356086 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.359186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.359386 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4" (OuterVolumeSpecName: "kube-api-access-pq7r4") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "kube-api-access-pq7r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456757 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456802 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456818 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.548265 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dk77x"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.548312 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dk77x"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.625004 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dk77x"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.681186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4cgz"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.681984 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4cgz"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.734767 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w4cgz"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903022 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" exitCode=0
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"}
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903142 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"}
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905114 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerDied","Data":"b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344"}
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905167 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344"
Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905718 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"
Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.260576 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.262392 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.315513 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.954065 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.921451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerStarted","Data":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"}
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.922804 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.922841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.923017 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerStarted","Data":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"}
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.940625 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njw8t" podStartSLOduration=3.952958945 podStartE2EDuration="1m3.94060594s" podCreationTimestamp="2026-01-30 10:14:03 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.813036929 +0000 UTC m=+150.379340753" lastFinishedPulling="2026-01-30 10:15:05.800683924 +0000 UTC m=+210.366987748" observedRunningTime="2026-01-30 10:15:06.93839967 +0000 UTC m=+211.504703484" watchObservedRunningTime="2026-01-30 10:15:06.94060594 +0000 UTC m=+211.506909764"
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.962079 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dc27n" podStartSLOduration=3.167813278 podStartE2EDuration="1m0.962063136s" podCreationTimestamp="2026-01-30 10:14:06 +0000 UTC" firstStartedPulling="2026-01-30 10:14:08.026425932 +0000 UTC m=+152.592729756" lastFinishedPulling="2026-01-30 10:15:05.82067577 +0000 UTC m=+210.386979614" observedRunningTime="2026-01-30 10:15:06.959545047 +0000 UTC m=+211.525848871" watchObservedRunningTime="2026-01-30 10:15:06.962063136 +0000 UTC m=+211.528366960"
Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.966708 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:07 crc kubenswrapper[4984]: I0130 10:15:07.968440 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:10 crc kubenswrapper[4984]: I0130 10:15:10.369480 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"]
Jan 30 10:15:10 crc kubenswrapper[4984]: I0130 10:15:10.370133 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" containerID="cri-o://5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" gracePeriod=2
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.756489 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870385 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") "
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870522 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") "
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870561 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") "
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.871587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities" (OuterVolumeSpecName: "utilities") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.882507 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t" (OuterVolumeSpecName: "kube-api-access-z8p7t") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "kube-api-access-z8p7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953779 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" exitCode=0
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"}
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953896 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455"}
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953927 4984 scope.go:117] "RemoveContainer" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.954304 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg"
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.972732 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.972776 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.976217 4984 scope.go:117] "RemoveContainer" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"
Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.995105 4984 scope.go:117] "RemoveContainer" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018107 4984 scope.go:117] "RemoveContainer" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"
Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.018663 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": container with ID starting with 5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e not found: ID does not exist" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018716 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"} err="failed to get container status \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": rpc error: code = NotFound desc = could not find container \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": container with ID starting with 5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e not found: ID does not exist"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018743 4984 scope.go:117] "RemoveContainer" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"
Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.019092 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": container with ID starting with 1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab not found: ID does not exist" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019124 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"} err="failed to get container status \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": rpc error: code = NotFound desc = could not find container \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": container with ID starting with 1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab not found: ID does not exist"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019145 4984 scope.go:117] "RemoveContainer" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"
Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.019451 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": container with ID starting with 3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6 not found: ID does not exist" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"
Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019487 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"} err="failed to get container status \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": rpc error: code = NotFound desc = could not find container \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": container with ID starting with 3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6 not found: ID does not exist"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.035599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.085986 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.197951 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"]
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.201180 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"]
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.211034 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8cnkg"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.599286 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dk77x"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.731000 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4cgz"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.737885 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njw8t"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.737929 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njw8t"
Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.799655 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njw8t"
Jan 30 10:15:14 crc kubenswrapper[4984]: I0130 10:15:14.032442 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njw8t"
Jan 30 10:15:14 crc kubenswrapper[4984]: I0130 10:15:14.100396 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" path="/var/lib/kubelet/pods/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe/volumes" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.572433 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.573023 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" containerID="cri-o://1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" gracePeriod=2 Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.968104 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990054 4984 generic.go:334] "Generic (PLEG): container finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" exitCode=0 Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"} Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990159 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2"} Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990183 4984 scope.go:117] "RemoveContainer" 
containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990410 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.008317 4984 scope.go:117] "RemoveContainer" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.024544 4984 scope.go:117] "RemoveContainer" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.045971 4984 scope.go:117] "RemoveContainer" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.046487 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": container with ID starting with 1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b not found: ID does not exist" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.046637 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"} err="failed to get container status \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": rpc error: code = NotFound desc = could not find container \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": container with ID starting with 1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.046685 4984 scope.go:117] "RemoveContainer" 
containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.046993 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": container with ID starting with 0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051 not found: ID does not exist" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047024 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051"} err="failed to get container status \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": rpc error: code = NotFound desc = could not find container \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": container with ID starting with 0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051 not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047038 4984 scope.go:117] "RemoveContainer" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.047327 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": container with ID starting with de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7 not found: ID does not exist" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047371 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7"} err="failed to get container status \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": rpc error: code = NotFound desc = could not find container \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": container with ID starting with de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7 not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127517 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127566 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127743 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.128963 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities" (OuterVolumeSpecName: "utilities") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.134204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4" (OuterVolumeSpecName: "kube-api-access-6b9h4") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "kube-api-access-6b9h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.173158 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.173491 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" containerID="cri-o://d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" gracePeriod=2 Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.208906 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230305 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230363 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230424 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.329418 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.333290 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.536882 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.585142 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.585294 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635189 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635297 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635402 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.636751 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities" (OuterVolumeSpecName: "utilities") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.637078 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.645708 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k" (OuterVolumeSpecName: "kube-api-access-9fd8k") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "kube-api-access-9fd8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.707349 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740011 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740074 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740088 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007480 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" exitCode=0 Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007624 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007711 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"} Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007767 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b"} Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007803 4984 scope.go:117] "RemoveContainer" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.042535 4984 scope.go:117] "RemoveContainer" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.054193 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.057729 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.069576 4984 scope.go:117] "RemoveContainer" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.070564 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.090441 4984 scope.go:117] "RemoveContainer" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc 
kubenswrapper[4984]: E0130 10:15:17.094619 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": container with ID starting with d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc not found: ID does not exist" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.094760 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"} err="failed to get container status \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": rpc error: code = NotFound desc = could not find container \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": container with ID starting with d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc not found: ID does not exist" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.094803 4984 scope.go:117] "RemoveContainer" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: E0130 10:15:17.097283 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": container with ID starting with ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9 not found: ID does not exist" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.097343 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9"} err="failed to get container status 
\"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": rpc error: code = NotFound desc = could not find container \"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": container with ID starting with ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9 not found: ID does not exist" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.097370 4984 scope.go:117] "RemoveContainer" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: E0130 10:15:17.098943 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": container with ID starting with b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26 not found: ID does not exist" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.099230 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26"} err="failed to get container status \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": rpc error: code = NotFound desc = could not find container \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": container with ID starting with b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26 not found: ID does not exist" Jan 30 10:15:18 crc kubenswrapper[4984]: I0130 10:15:18.101870 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" path="/var/lib/kubelet/pods/33689f3c-1867-4707-a8c2-ed56c467cff6/volumes" Jan 30 10:15:18 crc kubenswrapper[4984]: I0130 10:15:18.104211 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" 
path="/var/lib/kubelet/pods/874a87b2-c81a-4ce9-85c6-c41d18835f35/volumes" Jan 30 10:15:21 crc kubenswrapper[4984]: I0130 10:15:21.499308 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" containerID="cri-o://cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" gracePeriod=15 Jan 30 10:15:21 crc kubenswrapper[4984]: I0130 10:15:21.912477 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019005 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019045 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019080 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019114 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019133 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019152 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019191 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019213 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019228 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020157 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020214 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020314 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020372 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020407 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020428 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020619 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020631 4984 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020639 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020648 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 
10:15:22.020966 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.025957 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.026040 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9" (OuterVolumeSpecName: "kube-api-access-znjx9") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "kube-api-access-znjx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.026910 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.027234 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.028939 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.029237 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.033667 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035554 4984 generic.go:334] "Generic (PLEG): container finished" podID="b78342ea-bd31-48b3-b052-638da558730c" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" exitCode=0 Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerDied","Data":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035617 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerDied","Data":"22894fd3f7185098bfb82595039c231f2f5583d91c055ff95ffbf8f516afcd2e"} Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035634 4984 scope.go:117] "RemoveContainer" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035731 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.038551 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.040461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.091852 4984 scope.go:117] "RemoveContainer" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: E0130 10:15:22.092583 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": container with ID starting with cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61 not found: ID does not exist" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.092985 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} err="failed to get container status \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": rpc error: code = NotFound desc = could not find container \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": container with ID starting with cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61 not found: ID does not exist" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121691 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121723 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121733 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121743 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121752 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121761 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121773 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 
10:15:22.121783 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121791 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121799 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.350824 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.358515 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:15:24 crc kubenswrapper[4984]: I0130 10:15:24.096076 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78342ea-bd31-48b3-b052-638da558730c" path="/var/lib/kubelet/pods/b78342ea-bd31-48b3-b052-638da558730c/volumes" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.380557 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"] Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381711 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381742 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" 
containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381766 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381782 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381807 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381824 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381850 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381864 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381901 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381920 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381935 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" 
containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381955 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381970 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381994 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382010 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382026 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382042 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382059 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382074 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382112 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382128 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" 
containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382380 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382412 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382444 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382463 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382487 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.383205 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385795 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385929 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385956 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392093 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392368 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392603 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.393040 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.393748 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.394689 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.395227 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:15:30 
crc kubenswrapper[4984]: I0130 10:15:30.399546 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.400281 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.400330 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.406506 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.419437 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.429632 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " 
pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436117 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436159 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436200 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436263 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436296 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436316 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436355 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436380 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436476 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537500 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: 
\"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537662 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537711 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537753 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537892 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537927 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537967 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " 
pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538095 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538204 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.539357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.539406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.540981 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.541901 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc 
kubenswrapper[4984]: I0130 10:15:30.545090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.545693 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.545764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.546074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.546557 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.547172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.552668 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.553680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.559571 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.703834 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.807808 4984 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.808929 4984 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.808961 4984 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809085 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809165 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809179 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809191 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809200 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809214 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809240 4984 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809272 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809281 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809292 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809301 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809338 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809347 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809359 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809370 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809380 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809409 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809553 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809722 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809777 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809857 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809924 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809903 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809950 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810046 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810068 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810092 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810114 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810129 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.816170 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843663 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843905 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844129 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc 
kubenswrapper[4984]: I0130 10:15:30.844167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.948837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.948965 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.949944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950058 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950123 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950263 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950309 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950385 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950443 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.100280 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.102724 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104232 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104315 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104358 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104336 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104397 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" exitCode=2 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.107121 4984 generic.go:334] "Generic (PLEG): container finished" podID="6f61aac1-18eb-4615-958d-b52a11645afb" containerID="1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.107170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerDied","Data":"1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079"} Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.108017 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: 
connection refused" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467231 4984 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467432 4984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467506 4984 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467635 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb\\\" 
Netns:\\\"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s\\\": dial tcp 38.102.83.169:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.468422 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:31 crc kubenswrapper[4984]: 
&Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 
crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:31 crc kubenswrapper[4984]: > Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.121004 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.122415 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.123309 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.162787 4984 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" volumeName="registry-storage" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.446710 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.447874 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473322 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473397 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473394 4984 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock" (OuterVolumeSpecName: "var-lock") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473717 4984 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473727 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.481465 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.575926 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848126 4984 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848230 4984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to 
update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848331 4984 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error 
configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848445 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network \\\"multus-cni-network\\\": plugin 
type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88\\\" Netns:\\\"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s\\\": dial tcp 38.102.83.169:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.133987 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerDied","Data":"2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853"} Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.134346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.134126 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.173766 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.180365 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.181779 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.182490 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.182933 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295617 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295658 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296163 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296161 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.397882 4984 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.398157 4984 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.398279 4984 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: E0130 10:15:33.834678 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:33 crc kubenswrapper[4984]: &Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:33 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:33 crc kubenswrapper[4984]: > Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 
10:15:34.102125 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.143850 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145010 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" exitCode=0 Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145102 4984 scope.go:117] "RemoveContainer" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145156 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.146637 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.147299 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.150212 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.150604 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.161900 4984 scope.go:117] "RemoveContainer" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.177463 4984 scope.go:117] "RemoveContainer" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.197100 4984 scope.go:117] "RemoveContainer" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.213236 4984 scope.go:117] "RemoveContainer" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.230080 4984 scope.go:117] "RemoveContainer" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.252447 4984 scope.go:117] "RemoveContainer" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.254936 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": container with ID starting with 
7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733 not found: ID does not exist" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.254975 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733"} err="failed to get container status \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": rpc error: code = NotFound desc = could not find container \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": container with ID starting with 7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255003 4984 scope.go:117] "RemoveContainer" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.255472 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": container with ID starting with 9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0 not found: ID does not exist" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255512 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0"} err="failed to get container status \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": rpc error: code = NotFound desc = could not find container \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": container with ID starting with 9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0 not found: ID does not 
exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255537 4984 scope.go:117] "RemoveContainer" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.255917 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": container with ID starting with 77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b not found: ID does not exist" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255950 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b"} err="failed to get container status \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": rpc error: code = NotFound desc = could not find container \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": container with ID starting with 77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255970 4984 scope.go:117] "RemoveContainer" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.256382 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": container with ID starting with 969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9 not found: ID does not exist" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.256435 4984 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9"} err="failed to get container status \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": rpc error: code = NotFound desc = could not find container \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": container with ID starting with 969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.256462 4984 scope.go:117] "RemoveContainer" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.257017 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": container with ID starting with 73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434 not found: ID does not exist" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257047 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434"} err="failed to get container status \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": rpc error: code = NotFound desc = could not find container \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": container with ID starting with 73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257065 4984 scope.go:117] "RemoveContainer" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.257345 4984 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": container with ID starting with f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb not found: ID does not exist" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257366 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb"} err="failed to get container status \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": rpc error: code = NotFound desc = could not find container \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": container with ID starting with f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb not found: ID does not exist" Jan 30 10:15:35 crc kubenswrapper[4984]: E0130 10:15:35.841849 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:35 crc kubenswrapper[4984]: I0130 10:15:35.842493 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:35 crc kubenswrapper[4984]: W0130 10:15:35.885986 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea WatchSource:0}: Error finding container d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea: Status 404 returned error can't find the container with id d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.092195 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.093367 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.156406 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea"} Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.478958 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: 
connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.479410 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.479742 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.480220 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.480697 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.480744 4984 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.481185 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.682138 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.083297 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Jan 30 10:15:37 crc kubenswrapper[4984]: I0130 10:15:37.162243 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124"} Jan 30 10:15:37 crc kubenswrapper[4984]: I0130 10:15:37.163560 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.163583 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.885289 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Jan 30 10:15:38 crc kubenswrapper[4984]: E0130 10:15:38.171383 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:39 crc kubenswrapper[4984]: E0130 10:15:39.486469 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Jan 30 10:15:42 crc kubenswrapper[4984]: E0130 10:15:42.688454 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="6.4s" Jan 30 10:15:43 crc kubenswrapper[4984]: E0130 10:15:43.835931 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:43 crc kubenswrapper[4984]: &Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:43 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:43 crc 
kubenswrapper[4984]: > Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.089850 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.090730 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.104718 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.104756 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:44 crc kubenswrapper[4984]: E0130 10:15:44.105296 4984 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.105855 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.215663 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d307620d0884870a6e672247167a1f63c4395bee47bcf37596ddd47ebdccb2cd"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.231809 4984 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5945a1207df9f18a832f4361622a41a77f812af5137b5d1bece2b50590183fd5" exitCode=0 Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.231948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5945a1207df9f18a832f4361622a41a77f812af5137b5d1bece2b50590183fd5"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232476 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232513 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232812 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:45 crc kubenswrapper[4984]: E0130 10:15:45.233197 4984 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235698 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235778 4984 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862" exitCode=1 Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235826 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.236486 4984 scope.go:117] "RemoveContainer" containerID="7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.236737 4984 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.237168 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249009 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"382501c4a9401198b12f43f1c9871b331a3acaf16059b28153538e52464a9877"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249346 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"433924f8dbe65ee693927f99578aed74afe176943f52ca2997a63c77fce15fcd"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249358 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cdf289a1a4dc8784fd66d006c9a99bdd1ae05d195d7ab3badf10f5ba3600bc3"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249367 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6617ef6221db6ef99ee4d32c9c576b8835c442339f3a57e01f579889db682bee"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.261657 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.261718 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"386ac0e2da549d47f83372e76dbab9d7655bd295ed742327a255bc5959332337"} Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270331 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35d2d9606fba5f728c664118be2207d529b11e6f6b587f0f7e1e022706dd6c71"} Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270668 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270684 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270882 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.089641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.090163 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.250087 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.254152 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.275568 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.106694 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.106971 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.112384 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.281942 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a"} Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.282389 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"b12452f91f4422c135c699a751411ee22c9b5c803805d3b922fa747b5824deb1"} Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.282757 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.286473 4984 patch_prober.go:28] interesting pod/oauth-openshift-5969b76fdc-qf4wv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.286531 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292199 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/0.log" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292299 4984 generic.go:334] "Generic (PLEG): container finished" podID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" exitCode=255 Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292347 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerDied","Data":"3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a"} Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.293748 4984 scope.go:117] "RemoveContainer" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.704394 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.300101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301148 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/0.log" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301185 4984 generic.go:334] "Generic (PLEG): container finished" podID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" exitCode=255 Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerDied","Data":"b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644"} Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301242 4984 scope.go:117] "RemoveContainer" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301679 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" Jan 30 10:15:51 crc kubenswrapper[4984]: E0130 10:15:51.301943 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" 
podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.280754 4984 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.307684 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308153 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" Jan 30 10:15:52 crc kubenswrapper[4984]: E0130 10:15:52.308339 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308760 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308853 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.312803 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.331531 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="68f6fa21-dccc-4f13-a98b-983b556e4c18" Jan 30 10:15:53 crc kubenswrapper[4984]: I0130 10:15:53.315033 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:53 crc kubenswrapper[4984]: I0130 10:15:53.315082 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:56 crc kubenswrapper[4984]: I0130 10:15:56.108567 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68f6fa21-dccc-4f13-a98b-983b556e4c18" Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.704224 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.704962 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.705812 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" Jan 30 10:16:00 crc kubenswrapper[4984]: E0130 10:16:00.706194 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.541859 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 
10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.567633 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.817549 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.863897 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.235895 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.263786 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.526353 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.575589 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.731042 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.745667 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.853159 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.938131 4984 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.492846 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.679751 4984 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.819840 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.937043 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.939933 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.975521 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.977235 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.996333 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.033878 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.198508 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.201028 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.247827 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.306096 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.353823 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.374716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.468775 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.478272 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.499340 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.527771 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.547129 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 
10:16:05.657755 4984 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.754716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.822909 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.949407 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.103091 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.121855 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.148722 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.197174 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.201086 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.330157 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.370572 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 10:16:06 crc 
kubenswrapper[4984]: I0130 10:16:06.381332 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.540108 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.559039 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.580476 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.596237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.621907 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.683499 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.688490 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.801300 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.827940 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 
10:16:06.868196 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.876212 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.900469 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.933569 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.941747 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.021178 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.096467 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.108427 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.109622 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.211203 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.341866 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.439951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.498295 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.624855 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.695619 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.734119 4984 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.796103 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.870216 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.947627 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.987425 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.003803 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.176716 4984 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.213889 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.239772 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.317639 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.351727 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.371029 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.371038 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.378885 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.386062 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.410164 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.431567 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 
10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.473634 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.483816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.503666 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.616622 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.885958 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.921244 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.147951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.278512 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.386334 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.472121 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.558364 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.587598 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.642681 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.645033 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.871060 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.925445 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.956621 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.979400 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.989026 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.014663 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.132959 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 10:16:10 crc 
kubenswrapper[4984]: I0130 10:16:10.200508 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.232493 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.289924 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.304902 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.308984 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.348828 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.358032 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.403011 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.431182 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.466648 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.473717 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.608168 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.743873 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.803452 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.837214 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.853201 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.854658 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.883584 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.950373 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.999207 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.090364 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.108722 4984 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.286458 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.302860 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.309661 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.419823 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.433888 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.433951 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"65c333ae11e5607a3622044f3dd1aa65c8b84cfc261be44144dddfe71d8679ab"} Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.434300 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.442261 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.517397 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.580381 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.590392 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.603490 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.615169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.685826 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.713033 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.764109 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.775018 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.777466 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.782113 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 10:16:11 
crc kubenswrapper[4984]: I0130 10:16:11.791999 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.805217 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.849803 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.851915 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.909784 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.979842 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.024493 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.031415 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.056802 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.103537 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.127540 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.241122 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.267199 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.297044 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.319113 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.380372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.423557 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.525006 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.540989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.573668 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.643922 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.687361 4984 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.711679 4984 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.712076 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podStartSLOduration=76.71205666 podStartE2EDuration="1m16.71205666s" podCreationTimestamp="2026-01-30 10:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:11.471269838 +0000 UTC m=+276.037573662" watchObservedRunningTime="2026-01-30 10:16:12.71205666 +0000 UTC m=+277.278360484" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.715977 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.716025 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.716041 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"] Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.721846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.739286 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.7392731 podStartE2EDuration="20.7392731s" podCreationTimestamp="2026-01-30 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:12.737598614 +0000 UTC 
m=+277.303902468" watchObservedRunningTime="2026-01-30 10:16:12.7392731 +0000 UTC m=+277.305576924" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.765816 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.984687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.006178 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.015606 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.017852 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.051928 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.114195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.160581 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.166683 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.234309 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.293112 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.476391 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.514688 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.533879 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.629407 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.630430 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.719696 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.796450 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.801614 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.821454 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 
10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.859091 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.931109 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.947294 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.003138 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.033206 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.057629 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.068888 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.317805 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.324408 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.412159 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.452976 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.462440 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.497700 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.554496 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.640006 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.659010 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.722693 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.788724 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.813763 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.827665 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.837636 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 
10:16:14.859816 4984 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.860233 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" gracePeriod=5 Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.867436 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.967813 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.005121 4984 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.044518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.096319 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.152728 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.234097 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.271380 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 10:16:15 crc 
kubenswrapper[4984]: I0130 10:16:15.442450 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.504934 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.510290 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.511382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.626291 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.644116 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.776213 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.808432 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.924998 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.022423 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.049890 4984 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.055384 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.062370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.104737 4984 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.197784 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.202130 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.295537 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.308744 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.368793 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.490858 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.524174 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.641060 4984 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.943055 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.969025 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.990340 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.090890 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.173294 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.292111 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.345687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.441076 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.447027 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.447361 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.570353 4984 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.118441 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.583794 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.933972 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 10:16:19 crc kubenswrapper[4984]: I0130 10:16:19.107907 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.451839 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.451912 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485213 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485305 4984 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" exitCode=137 Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485362 4984 scope.go:117] "RemoveContainer" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485470 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.506531 4984 scope.go:117] "RemoveContainer" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: E0130 10:16:20.507067 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": container with ID starting with 510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124 not found: ID does not exist" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.507121 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124"} err="failed to get container status \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": rpc error: code = NotFound desc = could 
not find container \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": container with ID starting with 510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124 not found: ID does not exist" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607040 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607189 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607285 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607321 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607349 4984 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607411 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607476 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607858 4984 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607915 4984 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607936 4984 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607954 4984 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.618143 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.708952 4984 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.870667 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:16:22 crc kubenswrapper[4984]: I0130 10:16:22.070908 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 10:16:22 crc kubenswrapper[4984]: I0130 10:16:22.097108 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578074 4984 generic.go:334] "Generic (PLEG): container finished" podID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" exitCode=0 Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578193 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578907 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.586915 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" 
event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.587996 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.588922 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.887770 4984 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.593461 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.595296 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" containerID="cri-o://04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" gracePeriod=30 Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.693459 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.694307 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" containerID="cri-o://92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" gracePeriod=30 Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.993300 4984 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.035025 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.078234 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079086 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079105 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079119 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079126 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079134 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079140 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079153 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer" Jan 30 10:16:40 crc 
kubenswrapper[4984]: I0130 10:16:40.079159 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079262 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079275 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079286 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079295 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079648 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.085355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.086455 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087643 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087695 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.088684 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.091446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz" (OuterVolumeSpecName: "kube-api-access-lhskz") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "kube-api-access-lhskz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.092069 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.092291 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca" (OuterVolumeSpecName: "client-ca") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.095655 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config" (OuterVolumeSpecName: "config") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.106635 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.192524 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.192595 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca" (OuterVolumeSpecName: "client-ca") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193371 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193689 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193819 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193895 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: 
\"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193971 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194167 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194209 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194222 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194237 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194271 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194281 4984 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config" (OuterVolumeSpecName: "config") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.196837 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.197122 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg" (OuterVolumeSpecName: "kube-api-access-t67jg") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "kube-api-access-t67jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294878 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294938 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295077 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295175 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295197 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295216 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.296233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.296344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.300478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.326910 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.395929 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.611851 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616036 4984 generic.go:334] "Generic (PLEG): container finished" podID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" exitCode=0 Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616102 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616124 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerDied","Data":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"} Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616163 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerDied","Data":"ea7973a6b7aeb56d77b3657c44c45b40105b1dffec897b668fde3fd406ab2c03"} Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616192 4984 scope.go:117] "RemoveContainer" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621903 4984 generic.go:334] "Generic (PLEG): container finished" podID="f934f289-4896-49e7-b0ad-12222ed44137" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" exitCode=0 Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621944 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerDied","Data":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"} Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerDied","Data":"5d2a7595aa7be4a2d24c3db3a03ceede193b8f38eb6567b569e38559c698d2a9"} Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.622020 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.638220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.643112 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.653239 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.659595 4984 scope.go:117] "RemoveContainer" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.660070 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": container with ID starting with 04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142 not found: ID does not exist" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.660106 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"} err="failed to get container status \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": rpc error: code = NotFound desc = could not find container \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": container with ID starting with 04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142 not found: ID does not exist" Jan 30 10:16:40 crc kubenswrapper[4984]: 
I0130 10:16:40.660154 4984 scope.go:117] "RemoveContainer" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.664347 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.675487 4984 scope.go:117] "RemoveContainer" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.676038 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": container with ID starting with 92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607 not found: ID does not exist" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.676081 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"} err="failed to get container status \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": rpc error: code = NotFound desc = could not find container \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": container with ID starting with 92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607 not found: ID does not exist" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.419511 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.420338 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.422590 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.423946 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.425403 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.426847 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.427102 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.427420 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.433530 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.440066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508186 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " 
pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508582 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508860 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609858 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609932 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609980 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610056 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610978 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.611901 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.612950 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.618938 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.630391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerStarted","Data":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"} Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.630633 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerStarted","Data":"365e334180612a639f5ba661049874fb4f2f877225cb9d8766b3099b7bb63022"} Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.631339 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.640023 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.640416 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.658346 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" podStartSLOduration=1.658325652 podStartE2EDuration="1.658325652s" podCreationTimestamp="2026-01-30 10:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:41.650549007 +0000 UTC m=+306.216852891" watchObservedRunningTime="2026-01-30 10:16:41.658325652 +0000 UTC m=+306.224629486" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.747360 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.990028 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.117657 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" path="/var/lib/kubelet/pods/f03e3054-ba21-45c6-8cbd-786eb7eac685/volumes" Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.118539 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f934f289-4896-49e7-b0ad-12222ed44137" path="/var/lib/kubelet/pods/f934f289-4896-49e7-b0ad-12222ed44137/volumes" Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.638627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerStarted","Data":"8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce"} Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.639836 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.639985 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerStarted","Data":"441e7a7744b46b2ece6c079718d1396613b0f164a34f5b2b2ecf871a33435b67"} Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.643103 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.662394 4984 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" podStartSLOduration=3.662371437 podStartE2EDuration="3.662371437s" podCreationTimestamp="2026-01-30 10:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:42.657823902 +0000 UTC m=+307.224127726" watchObservedRunningTime="2026-01-30 10:16:42.662371437 +0000 UTC m=+307.228675271" Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.596858 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.598333 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" containerID="cri-o://8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" gracePeriod=30 Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.736010 4984 generic.go:334] "Generic (PLEG): container finished" podID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerID="8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" exitCode=0 Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.736048 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerDied","Data":"8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce"} Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.218464 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.376733 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.377556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.378697 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.378944 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.379166 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " Jan 30 10:17:00 crc kubenswrapper[4984]: 
I0130 10:17:00.379906 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.380473 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.380796 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca" (OuterVolumeSpecName: "client-ca") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.381084 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config" (OuterVolumeSpecName: "config") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.383516 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg" (OuterVolumeSpecName: "kube-api-access-cpgdg") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "kube-api-access-cpgdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.384581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481848 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481906 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481953 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481977 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.742568 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerDied","Data":"441e7a7744b46b2ece6c079718d1396613b0f164a34f5b2b2ecf871a33435b67"} Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.742596 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.743509 4984 scope.go:117] "RemoveContainer" containerID="8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.780877 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.787934 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.432774 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:01 crc kubenswrapper[4984]: E0130 10:17:01.433021 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433037 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433139 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433522 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.436708 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.436904 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.437663 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.437892 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.438030 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.441654 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.448391 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.451979 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594784 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " 
pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594847 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594889 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594931 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.595059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696128 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696222 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696313 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696397 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696476 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.697771 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.697916 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.698133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.700961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.725926 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 
10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.772887 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.987378 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:02 crc kubenswrapper[4984]: W0130 10:17:01.994759 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb375f7_22cc_4552_9c4b_49cb9ced2000.slice/crio-92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f WatchSource:0}: Error finding container 92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f: Status 404 returned error can't find the container with id 92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.097517 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" path="/var/lib/kubelet/pods/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4/volumes" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" event={"ID":"3bb375f7-22cc-4552-9c4b-49cb9ced2000","Type":"ContainerStarted","Data":"70b6f513b6f674fb1913a7bb58c3c0f49b34088d5d5dfbdd67fc4ce8126acfdd"} Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759511 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" event={"ID":"3bb375f7-22cc-4552-9c4b-49cb9ced2000","Type":"ContainerStarted","Data":"92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f"} Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759893 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.766135 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.780935 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" podStartSLOduration=3.780905919 podStartE2EDuration="3.780905919s" podCreationTimestamp="2026-01-30 10:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:02.779806818 +0000 UTC m=+327.346110642" watchObservedRunningTime="2026-01-30 10:17:02.780905919 +0000 UTC m=+327.347209743" Jan 30 10:17:03 crc kubenswrapper[4984]: I0130 10:17:03.001387 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:17:03 crc kubenswrapper[4984]: I0130 10:17:03.001467 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.948369 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.953297 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w4cgz" 
podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" containerID="cri-o://860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.957747 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.958099 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" containerID="cri-o://8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.972508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.972837 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" containerID="cri-o://626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.986391 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.986660 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" containerID="cri-o://acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" gracePeriod=30 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.001967 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.002312 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" containerID="cri-o://cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" gracePeriod=30 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.005229 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.006194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.008177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123263 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.223911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.224007 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.224037 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.225378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: 
\"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.232598 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.240889 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.402896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.473238 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633416 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633659 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.634488 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.637881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5" (OuterVolumeSpecName: "kube-api-access-hssw5") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "kube-api-access-hssw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.638061 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.654358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.664550 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.680893 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.685891 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734499 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734523 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735120 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735158 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735169 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities" (OuterVolumeSpecName: "utilities") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.736899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87" (OuterVolumeSpecName: "kube-api-access-hxf87") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "kube-api-access-hxf87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810289 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810362 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810441 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"d624716dec815a31dc6fb1b18652f2e1a4591d64f410b4c644c4fb229fcd424e"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810467 4984 scope.go:117] "RemoveContainer" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.815698 4984 generic.go:334] "Generic (PLEG): container finished" podID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.815867 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.816817 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.816847 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"d50bbcffbf98d16fce57cd7c81f40638192b3cecf76451eac0e5109332dde5b2"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818851 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818921 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818924 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829232 4984 generic.go:334] "Generic (PLEG): container finished" podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"ff00561d64a6b687fd04a281bc0b10957180facce6b051e1ab6f63d8c0e3e399"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829399 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832399 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832436 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832445 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832456 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"4aa1ead20f9be6ec24d5528456caa32578bf134deec9e2dc8d7d858e101255c0"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836429 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836516 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836541 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836565 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836823 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836835 4984 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.837563 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities" (OuterVolumeSpecName: "utilities") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: "44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.838498 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities" (OuterVolumeSpecName: "utilities") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.840658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn" (OuterVolumeSpecName: "kube-api-access-q7xqn") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "kube-api-access-q7xqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.840778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw" (OuterVolumeSpecName: "kube-api-access-lgrgw") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "kube-api-access-lgrgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.845035 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k" (OuterVolumeSpecName: "kube-api-access-9hs7k") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: "44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "kube-api-access-9hs7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.848330 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities" (OuterVolumeSpecName: "utilities") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.848545 4984 scope.go:117] "RemoveContainer" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.864361 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.868631 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.881593 4984 scope.go:117] "RemoveContainer" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.883470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: 
"44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.894637 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.895179 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.896593 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897026 4984 scope.go:117] "RemoveContainer" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.897402 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": container with ID starting with 860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc not found: ID does not exist" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897430 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"} err="failed to get container status \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": rpc error: code = NotFound desc = could not find container \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": container with ID starting with 860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897450 4984 scope.go:117] "RemoveContainer" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.897752 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": container with ID starting with 501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6 not found: ID does not exist" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897774 
4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6"} err="failed to get container status \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": rpc error: code = NotFound desc = could not find container \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": container with ID starting with 501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897785 4984 scope.go:117] "RemoveContainer" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.898042 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": container with ID starting with f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8 not found: ID does not exist" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.898060 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8"} err="failed to get container status \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": rpc error: code = NotFound desc = could not find container \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": container with ID starting with f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.898072 4984 scope.go:117] "RemoveContainer" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 
10:17:10.909534 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920223 4984 scope.go:117] "RemoveContainer" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.920657 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": container with ID starting with 626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9 not found: ID does not exist" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920697 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} err="failed to get container status \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": rpc error: code = NotFound desc = could not find container \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": container with ID starting with 626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920723 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.920972 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": container with ID starting with a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7 not found: ID does not exist" 
containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920999 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} err="failed to get container status \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": rpc error: code = NotFound desc = could not find container \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": container with ID starting with a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.921022 4984 scope.go:117] "RemoveContainer" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.935688 4984 scope.go:117] "RemoveContainer" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938377 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938494 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938599 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938697 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938798 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938897 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938992 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939093 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939195 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939317 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.954395 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.962882 
4984 scope.go:117] "RemoveContainer" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:10 crc kubenswrapper[4984]: W0130 10:17:10.968095 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded0e4098_37d9_4094_99d0_1892881696ad.slice/crio-f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3 WatchSource:0}: Error finding container f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3: Status 404 returned error can't find the container with id f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3 Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.001683 4984 scope.go:117] "RemoveContainer" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.002237 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": container with ID starting with cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d not found: ID does not exist" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002348 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"} err="failed to get container status \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": rpc error: code = NotFound desc = could not find container \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": container with ID starting with cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002427 4984 scope.go:117] "RemoveContainer" 
containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.002834 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": container with ID starting with d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5 not found: ID does not exist" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002942 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5"} err="failed to get container status \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": rpc error: code = NotFound desc = could not find container \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": container with ID starting with d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003039 4984 scope.go:117] "RemoveContainer" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.003480 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": container with ID starting with ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193 not found: ID does not exist" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003618 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193"} err="failed to get container status \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": rpc error: code = NotFound desc = could not find container \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": container with ID starting with ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003724 4984 scope.go:117] "RemoveContainer" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.020960 4984 scope.go:117] "RemoveContainer" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.044790 4984 scope.go:117] "RemoveContainer" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.068115 4984 scope.go:117] "RemoveContainer" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069060 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": container with ID starting with 8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb not found: ID does not exist" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069104 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"} err="failed to get container status \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": rpc error: code = 
NotFound desc = could not find container \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": container with ID starting with 8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069129 4984 scope.go:117] "RemoveContainer" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069542 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": container with ID starting with e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74 not found: ID does not exist" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069563 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74"} err="failed to get container status \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": rpc error: code = NotFound desc = could not find container \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": container with ID starting with e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069575 4984 scope.go:117] "RemoveContainer" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069954 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": container with ID starting with 
1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba not found: ID does not exist" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.070076 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba"} err="failed to get container status \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": rpc error: code = NotFound desc = could not find container \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": container with ID starting with 1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.070179 4984 scope.go:117] "RemoveContainer" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.092033 4984 scope.go:117] "RemoveContainer" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.161522 4984 scope.go:117] "RemoveContainer" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.196328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.199500 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.208682 4984 scope.go:117] "RemoveContainer" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.210607 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": container with ID starting with acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3 not found: ID does not exist" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.210688 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"} err="failed to get container status \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": rpc error: code = NotFound desc = could not find container \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": container with ID starting with acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.210734 4984 scope.go:117] "RemoveContainer" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.211768 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": container with ID starting with 4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8 not found: ID does not exist" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.211801 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8"} err="failed to get container status \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": rpc error: code = NotFound desc = could not find container \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": 
container with ID starting with 4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.211825 4984 scope.go:117] "RemoveContainer" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.212574 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": container with ID starting with 2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f not found: ID does not exist" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.212596 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f"} err="failed to get container status \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": rpc error: code = NotFound desc = could not find container \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": container with ID starting with 2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.216562 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.222296 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.232508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.235288 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.247342 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.251243 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690673 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690889 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690905 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690919 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690927 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690939 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690948 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690958 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-content" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.690967 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690979 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690988 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690998 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691007 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691019 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691027 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691041 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691048 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691058 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.691067 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691079 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691097 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691106 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691119 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691127 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691140 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691150 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691165 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.691176 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691306 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691322 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691336 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691348 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691361 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691374 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.692240 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.694496 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.704546 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840553 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" event={"ID":"ed0e4098-37d9-4094-99d0-1892881696ad","Type":"ContainerStarted","Data":"e66f965fe7aae5cc6c0005cae866c92a7668a150f013c269281c0c6e2318b7d1"} Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840623 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" event={"ID":"ed0e4098-37d9-4094-99d0-1892881696ad","Type":"ContainerStarted","Data":"f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3"} Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840871 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.845858 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851465 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851511 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxntp\" (UniqueName: \"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.859899 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" podStartSLOduration=2.859875179 podStartE2EDuration="2.859875179s" podCreationTimestamp="2026-01-30 10:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:11.856716731 +0000 UTC m=+336.423020555" watchObservedRunningTime="2026-01-30 10:17:11.859875179 +0000 UTC m=+336.426179003" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953233 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953295 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxntp\" (UniqueName: 
\"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953929 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.975326 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxntp\" (UniqueName: \"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.021122 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.099681 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" path="/var/lib/kubelet/pods/44e02fc4-8da4-4122-bd3a-9b8f9734ec59/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.101005 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" path="/var/lib/kubelet/pods/4aab6e83-8a77-45ad-aa28-fe2c519133fb/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.102280 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" path="/var/lib/kubelet/pods/94ba287c-b444-471f-8be9-e1c553ee251e/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.103715 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" path="/var/lib/kubelet/pods/b628557d-490d-4803-8ae3-fde88678c6a4/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.104581 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" path="/var/lib/kubelet/pods/b92a67bb-8407-4e47-9d9a-9d15398d90ed/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.437446 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:12 crc kubenswrapper[4984]: W0130 10:17:12.445357 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719f7e0f_9e74_40fe_b2cb_a967e9e0ac4d.slice/crio-7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360 WatchSource:0}: Error finding container 7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360: Status 404 returned error can't find the container with id 
7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360 Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.849789 4984 generic.go:334] "Generic (PLEG): container finished" podID="719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d" containerID="806b86815f0eb53cbea203f9a2da1723e2a4b34380c9188a2755ec9e1452e070" exitCode=0 Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.849929 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerDied","Data":"806b86815f0eb53cbea203f9a2da1723e2a4b34380c9188a2755ec9e1452e070"} Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.850402 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerStarted","Data":"7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360"} Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.093624 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.095203 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.097864 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.113887 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167921 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269211 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " 
pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269301 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269372 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.270041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.270056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.300249 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 
30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.414580 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.815234 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: W0130 10:17:13.830423 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24af9dab_3f7a_4433_b367_5ecafcf89754.slice/crio-499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f WatchSource:0}: Error finding container 499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f: Status 404 returned error can't find the container with id 499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.857688 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f"} Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.859389 4984 generic.go:334] "Generic (PLEG): container finished" podID="719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d" containerID="dfede5255ad7fa3967d73e48ff3b4bf1c4baf42f585ac700b13bbcdf6a442422" exitCode=0 Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.860368 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerDied","Data":"dfede5255ad7fa3967d73e48ff3b4bf1c4baf42f585ac700b13bbcdf6a442422"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.100357 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc 
kubenswrapper[4984]: I0130 10:17:14.102571 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.107816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.130942 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181337 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181380 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xb5f\" (UniqueName: \"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181409 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283141 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283190 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xb5f\" (UniqueName: \"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283222 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283905 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.307331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xb5f\" (UniqueName: 
\"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.430857 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.842682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc kubenswrapper[4984]: W0130 10:17:14.847491 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc47b45ee_75cf_4e33_bfde_721099cda0a9.slice/crio-ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4 WatchSource:0}: Error finding container ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4: Status 404 returned error can't find the container with id ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4 Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.866673 4984 generic.go:334] "Generic (PLEG): container finished" podID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerID="47ecda0029228306a7b8a47d8f098ce8c53744ce863e056fdfde483e9dd11ca1" exitCode=0 Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.866754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerDied","Data":"47ecda0029228306a7b8a47d8f098ce8c53744ce863e056fdfde483e9dd11ca1"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.870291 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" 
event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerStarted","Data":"b2a0860734fd0e7d060f07123b107886535e6a72ced4d4cfb58cff4b8639eb7d"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.873357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.899477 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8prhf" podStartSLOduration=2.452469697 podStartE2EDuration="3.899462935s" podCreationTimestamp="2026-01-30 10:17:11 +0000 UTC" firstStartedPulling="2026-01-30 10:17:12.851345019 +0000 UTC m=+337.417648843" lastFinishedPulling="2026-01-30 10:17:14.298338257 +0000 UTC m=+338.864642081" observedRunningTime="2026-01-30 10:17:14.899455405 +0000 UTC m=+339.465759239" watchObservedRunningTime="2026-01-30 10:17:14.899462935 +0000 UTC m=+339.465766759" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.488058 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.489149 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.494040 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.496119 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600320 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701647 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" 
(UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701729 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701747 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.702356 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.702601 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.721186 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " 
pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.854292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.881370 4984 generic.go:334] "Generic (PLEG): container finished" podID="c47b45ee-75cf-4e33-bfde-721099cda0a9" containerID="487d175dd5ae0ff0acad6068ead72fb6c51a679c5979ee525cdc7e53c255532c" exitCode=0 Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.881557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerDied","Data":"487d175dd5ae0ff0acad6068ead72fb6c51a679c5979ee525cdc7e53c255532c"} Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.883738 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.301022 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:16 crc kubenswrapper[4984]: W0130 10:17:16.309414 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda725adac_ef1c_400b_bde2_756c97779906.slice/crio-ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a WatchSource:0}: Error finding container ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a: Status 404 returned error can't find the container with id ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.888587 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="a725adac-ef1c-400b-bde2-756c97779906" containerID="1044e6b90beb26c2b6c1eeb33ee89ad1764ec5bb79add932e4ee2ae5a8ed8506" exitCode=0 Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.888683 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerDied","Data":"1044e6b90beb26c2b6c1eeb33ee89ad1764ec5bb79add932e4ee2ae5a8ed8506"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.889082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.893149 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.913243 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerDied","Data":"90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.913366 4984 generic.go:334] "Generic (PLEG): container finished" podID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerID="90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a" exitCode=0 Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.921162 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"5d90c6fc96816176aebf6676207c28fc3c673322d5e57773cfa1c3e61e12fea8"} 
Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.924866 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f"} Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.935627 4984 generic.go:334] "Generic (PLEG): container finished" podID="c47b45ee-75cf-4e33-bfde-721099cda0a9" containerID="defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2" exitCode=0 Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.935693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerDied","Data":"defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2"} Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.952209 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47j92" podStartSLOduration=2.469165467 podStartE2EDuration="4.952192867s" podCreationTimestamp="2026-01-30 10:17:13 +0000 UTC" firstStartedPulling="2026-01-30 10:17:14.86843528 +0000 UTC m=+339.434739104" lastFinishedPulling="2026-01-30 10:17:17.35146265 +0000 UTC m=+341.917766504" observedRunningTime="2026-01-30 10:17:17.946690274 +0000 UTC m=+342.512994118" watchObservedRunningTime="2026-01-30 10:17:17.952192867 +0000 UTC m=+342.518496691" Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.947880 4984 generic.go:334] "Generic (PLEG): container finished" podID="a725adac-ef1c-400b-bde2-756c97779906" containerID="ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f" exitCode=0 Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.947967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" 
event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerDied","Data":"ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f"} Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.953349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"8c70e413c823685c1d3d70a41e1044476c0de319360ad0ad5db6742c34076ae0"} Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.994739 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zckjp" podStartSLOduration=2.516890264 podStartE2EDuration="4.99471038s" podCreationTimestamp="2026-01-30 10:17:14 +0000 UTC" firstStartedPulling="2026-01-30 10:17:15.883081735 +0000 UTC m=+340.449385559" lastFinishedPulling="2026-01-30 10:17:18.360901851 +0000 UTC m=+342.927205675" observedRunningTime="2026-01-30 10:17:18.989632659 +0000 UTC m=+343.555936493" watchObservedRunningTime="2026-01-30 10:17:18.99471038 +0000 UTC m=+343.561014214" Jan 30 10:17:19 crc kubenswrapper[4984]: I0130 10:17:19.960583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"65e5e09de7b9d1abf6b633becaa052b28ee393c234c2fe1eba6273ee4044e068"} Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.022190 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.022950 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.074328 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 
10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.099988 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn9gx" podStartSLOduration=4.61704485 podStartE2EDuration="7.099971657s" podCreationTimestamp="2026-01-30 10:17:15 +0000 UTC" firstStartedPulling="2026-01-30 10:17:16.890613834 +0000 UTC m=+341.456917658" lastFinishedPulling="2026-01-30 10:17:19.373540641 +0000 UTC m=+343.939844465" observedRunningTime="2026-01-30 10:17:19.983921757 +0000 UTC m=+344.550225581" watchObservedRunningTime="2026-01-30 10:17:22.099971657 +0000 UTC m=+346.666275481" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.012177 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.415162 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.415569 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.431969 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.432054 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.469971 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47j92" podUID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerName="registry-server" probeResult="failure" output=< Jan 30 10:17:24 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:17:24 crc kubenswrapper[4984]: > Jan 30 10:17:24 
crc kubenswrapper[4984]: I0130 10:17:24.483633 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.027607 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.855338 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.855852 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.899867 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:26 crc kubenswrapper[4984]: I0130 10:17:26.032966 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.799667 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.800754 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.830052 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894724 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894774 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894849 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895114 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895183 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.915389 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.996932 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.996994 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997109 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997136 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997572 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.998437 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" 
Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.999118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.005082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.008910 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.021340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.022393 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.115236 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.545474 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:31 crc kubenswrapper[4984]: W0130 10:17:31.551416 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3dc9055_604a_4c4d_b57e_de76e82bcc80.slice/crio-fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73 WatchSource:0}: Error finding container fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73: Status 404 returned error can't find the container with id fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73 Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.026782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" event={"ID":"b3dc9055-604a-4c4d-b57e-de76e82bcc80","Type":"ContainerStarted","Data":"ae9781c5015bfacf2d93b2701ee339ff0f4c375c49cc5ca8487ffe64f37ba9e0"} Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.026824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" event={"ID":"b3dc9055-604a-4c4d-b57e-de76e82bcc80","Type":"ContainerStarted","Data":"fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73"} Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.027809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.044938 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" podStartSLOduration=2.044925308 podStartE2EDuration="2.044925308s" podCreationTimestamp="2026-01-30 10:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:32.041603165 +0000 UTC m=+356.607906989" watchObservedRunningTime="2026-01-30 10:17:32.044925308 +0000 UTC m=+356.611229122" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.000456 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.000827 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.453016 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.492389 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.590440 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.591305 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" 
podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" containerID="cri-o://d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" gracePeriod=30 Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.948125 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064157 4984 generic.go:334] "Generic (PLEG): container finished" podID="651b92be-48ed-4019-8a48-91138fdcd356" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" exitCode=0 Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064226 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerDied","Data":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"} Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerDied","Data":"365e334180612a639f5ba661049874fb4f2f877225cb9d8766b3099b7bb63022"} Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064577 4984 scope.go:117] "RemoveContainer" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.084934 4984 scope.go:117] "RemoveContainer" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: 
E0130 10:17:40.085406 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": container with ID starting with d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c not found: ID does not exist" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.085439 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"} err="failed to get container status \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": rpc error: code = NotFound desc = could not find container \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": container with ID starting with d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c not found: ID does not exist" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.116964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117081 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117156 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod 
\"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117190 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.118224 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca" (OuterVolumeSpecName: "client-ca") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119368 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config" (OuterVolumeSpecName: "config") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119525 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119559 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.123956 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.123986 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w" (OuterVolumeSpecName: "kube-api-access-hws8w") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "kube-api-access-hws8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.221416 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.221454 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.395960 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.401220 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.461477 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:41 crc kubenswrapper[4984]: E0130 10:17:41.462467 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.462500 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.462593 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.463008 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.464940 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.464989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.465762 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466016 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466108 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466187 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.475739 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637147 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637533 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738818 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod 
\"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738957 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.739005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.740291 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.740520 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.744539 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.760575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.818964 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:42 crc kubenswrapper[4984]: I0130 10:17:42.097940 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651b92be-48ed-4019-8a48-91138fdcd356" path="/var/lib/kubelet/pods/651b92be-48ed-4019-8a48-91138fdcd356/volumes" Jan 30 10:17:42 crc kubenswrapper[4984]: I0130 10:17:42.219417 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.083333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" event={"ID":"4d14038f-705c-4e89-8fb3-fee372da5d38","Type":"ContainerStarted","Data":"4e55769d87baf973f2b758cc1a2904d9f88fc25f680db481ff582d45fa1bddf3"} Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.083389 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" event={"ID":"4d14038f-705c-4e89-8fb3-fee372da5d38","Type":"ContainerStarted","Data":"36037868e40bcec16ed23d16e1cd857c89473c25ee8745ba9e347d5db46f52ed"} Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.085227 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.091143 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.108791 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" podStartSLOduration=4.108774891 podStartE2EDuration="4.108774891s" podCreationTimestamp="2026-01-30 10:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:43.105113269 +0000 UTC m=+367.671417103" watchObservedRunningTime="2026-01-30 10:17:43.108774891 +0000 UTC m=+367.675078715" Jan 30 10:17:51 crc kubenswrapper[4984]: I0130 10:17:51.122948 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:51 crc kubenswrapper[4984]: I0130 10:17:51.209195 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001374 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001816 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001959 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.002838 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.003233 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761" gracePeriod=600 Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223349 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761" exitCode=0 Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223417 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"} Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223467 4984 scope.go:117] "RemoveContainer" containerID="e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" Jan 30 10:18:04 crc kubenswrapper[4984]: I0130 10:18:04.232296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"} Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.277574 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" containerID="cri-o://2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" gracePeriod=30 Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.734993 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877407 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877500 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877534 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877584 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877734 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877777 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877866 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.879494 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.879640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.885972 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.886765 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.888375 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.891618 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.893130 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j" (OuterVolumeSpecName: "kube-api-access-8wq2j") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "kube-api-access-8wq2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.913480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979553 4984 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979618 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979639 4984 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wq2j\" (UniqueName: 
\"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979681 4984 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979701 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979721 4984 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325667 4984 generic.go:334] "Generic (PLEG): container finished" podID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" exitCode=0 Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325725 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerDied","Data":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325772 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerDied","Data":"79705b85e33c0776d034e28c0f0671763dc639d3eee8637beef1fb06cd051685"} Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325770 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325796 4984 scope.go:117] "RemoveContainer" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.350407 4984 scope.go:117] "RemoveContainer" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" Jan 30 10:18:17 crc kubenswrapper[4984]: E0130 10:18:17.352044 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": container with ID starting with 2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839 not found: ID does not exist" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.352088 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} err="failed to get container status \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": rpc error: code = NotFound desc = could not find container \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": container with ID starting with 2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839 not found: ID does not exist" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.383147 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.395007 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:18:18 crc kubenswrapper[4984]: I0130 10:18:18.101515 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" path="/var/lib/kubelet/pods/d3d42d7f-49ec-4169-a79d-f46ccd275e20/volumes" Jan 30 10:18:21 crc kubenswrapper[4984]: I0130 10:18:21.708113 4984 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-lv7sn container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.28:5000/healthz\": dial tcp 10.217.0.28:5000: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 10:18:21 crc kubenswrapper[4984]: I0130 10:18:21.708210 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.28:5000/healthz\": dial tcp 10.217.0.28:5000: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 30 10:20:03 crc kubenswrapper[4984]: I0130 10:20:03.000881 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:20:03 crc kubenswrapper[4984]: I0130 10:20:03.001963 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:20:33 crc kubenswrapper[4984]: I0130 10:20:33.000641 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 10:20:33 crc kubenswrapper[4984]: I0130 10:20:33.001313 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.000678 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001286 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001335 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001939 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.002006 4984 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" gracePeriod=600 Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401168 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" exitCode=0 Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401303 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"} Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401803 4984 scope.go:117] "RemoveContainer" containerID="fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761" Jan 30 10:21:05 crc kubenswrapper[4984]: I0130 10:21:05.408622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"} Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.732533 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"] Jan 30 10:22:40 crc kubenswrapper[4984]: E0130 10:22:40.733368 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733384 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733487 4984 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733919 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.736434 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.736968 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-682sg" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.739106 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.747242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"] Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.751754 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"] Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.752649 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rlb95" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.755517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-trd9q" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.766044 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"] Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.780855 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"] Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.781744 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.783677 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s2p7s" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.792036 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"] Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878658 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " 
pod="cert-manager/cert-manager-858654f9db-rlb95" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878689 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584nw\" (UniqueName: \"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979345 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " pod="cert-manager/cert-manager-858654f9db-rlb95" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979385 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584nw\" (UniqueName: \"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979636 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.998925 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584nw\" (UniqueName: 
\"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.002184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " pod="cert-manager/cert-manager-858654f9db-rlb95" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.003689 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.050866 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.072550 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rlb95" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.096670 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.527044 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"] Jan 30 10:22:41 crc kubenswrapper[4984]: W0130 10:22:41.534040 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c83115_1333_4064_8217_eb2edae57d74.slice/crio-25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6 WatchSource:0}: Error finding container 25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6: Status 404 returned error can't find the container with id 25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6 Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.536274 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.582335 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"] Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.583127 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"] Jan 30 10:22:41 crc kubenswrapper[4984]: W0130 10:22:41.591634 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a218ad6_abfb_49ac_9f07_a79d9f3bd07e.slice/crio-6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11 WatchSource:0}: Error finding container 6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11: Status 404 returned error can't find the container with id 6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11 Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.000155 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" event={"ID":"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e","Type":"ContainerStarted","Data":"6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11"} Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.001335 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rlb95" event={"ID":"f1c83115-1333-4064-8217-eb2edae57d74","Type":"ContainerStarted","Data":"25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6"} Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.003371 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" event={"ID":"c7557472-15a5-48a9-8a84-bd8478d45a4b","Type":"ContainerStarted","Data":"970a974528026adafa77b3fb3fde690a63f0f21700fb55408af88b1d78d47763"} Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.024913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" event={"ID":"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e","Type":"ContainerStarted","Data":"fe5e441a51010fd90b77deec990bae6548eef663f2cac824fca41f892eed9f16"} Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.025527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.028685 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rlb95" event={"ID":"f1c83115-1333-4064-8217-eb2edae57d74","Type":"ContainerStarted","Data":"66ede26cfd344150079cc17a5f99f7653c59b0a02a491548ee53c4c64335a390"} Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.030189 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" 
event={"ID":"c7557472-15a5-48a9-8a84-bd8478d45a4b","Type":"ContainerStarted","Data":"a6084e0d962926ecf26fa33dd874ef8e42195380093c74563e5f9b16b7d2c053"} Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.041378 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" podStartSLOduration=2.716812713 podStartE2EDuration="6.041359704s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.59456543 +0000 UTC m=+666.160869264" lastFinishedPulling="2026-01-30 10:22:44.919112411 +0000 UTC m=+669.485416255" observedRunningTime="2026-01-30 10:22:46.040165113 +0000 UTC m=+670.606468947" watchObservedRunningTime="2026-01-30 10:22:46.041359704 +0000 UTC m=+670.607663538" Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.056562 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rlb95" podStartSLOduration=2.692131299 podStartE2EDuration="6.056536317s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.536022138 +0000 UTC m=+666.102325952" lastFinishedPulling="2026-01-30 10:22:44.900427156 +0000 UTC m=+669.466730970" observedRunningTime="2026-01-30 10:22:46.053296271 +0000 UTC m=+670.619600095" watchObservedRunningTime="2026-01-30 10:22:46.056536317 +0000 UTC m=+670.622840151" Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.073589 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" podStartSLOduration=2.744298642 podStartE2EDuration="6.073570938s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.586673081 +0000 UTC m=+666.152977035" lastFinishedPulling="2026-01-30 10:22:44.915945507 +0000 UTC m=+669.482249331" observedRunningTime="2026-01-30 10:22:46.070783114 +0000 UTC m=+670.637086978" watchObservedRunningTime="2026-01-30 
10:22:46.073570938 +0000 UTC m=+670.639874782" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.229218 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"] Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.229999 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller" containerID="cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230069 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb" containerID="cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230143 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd" containerID="cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230197 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230231 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb" 
containerID="cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230283 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node" containerID="cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230309 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging" containerID="cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.269482 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" containerID="cri-o://ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" gracePeriod=30 Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.530313 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.532781 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-acl-logging/0.log" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.533451 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-controller/0.log" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.534070 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604147 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x29cg"] Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604391 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604405 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604416 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604423 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604431 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604437 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604450 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604458 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604470 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604477 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604485 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604491 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604500 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604507 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604521 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604533 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kubecfg-setup" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604540 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kubecfg-setup" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604548 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604555 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604565 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604572 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604678 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604694 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604704 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604712 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604723 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604733 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604744 4984 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604753 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604760 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604768 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604776 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604865 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604874 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604890 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604989 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.606748 4984 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624753 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624947 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625027 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625065 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" 
(UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625087 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625101 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625115 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625133 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625152 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625197 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625286 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625310 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625327 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625358 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625420 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625438 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625497 4984 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.726964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727034 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727105 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727132 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727185 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727207 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727227 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727274 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727306 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727334 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727361 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727382 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod 
\"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727409 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727432 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727456 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727478 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727499 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 
10:22:50.727518 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727653 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727686 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727710 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727730 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727757 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727789 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727821 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727853 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727919 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727945 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727993 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728020 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728041 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728070 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728090 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728139 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod 
\"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728237 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728377 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash" (OuterVolumeSpecName: "host-slash") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728432 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728457 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728483 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728704 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728620 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log" (OuterVolumeSpecName: "node-log") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728864 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket" (OuterVolumeSpecName: "log-socket") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728855 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729300 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729337 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729367 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: 
\"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729639 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729568 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729712 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc 
kubenswrapper[4984]: I0130 10:22:50.729733 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729753 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728957 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.730008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729475 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729502 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729523 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729907 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.730399 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.735584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb" (OuterVolumeSpecName: "kube-api-access-q7vnb") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "kube-api-access-q7vnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.735871 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.737817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.751917 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.760005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830369 4984 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830432 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830452 4984 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830468 4984 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830485 4984 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830501 4984 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830517 4984 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830539 4984 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830555 4984 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830571 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: 
I0130 10:22:50.830588 4984 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830604 4984 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830620 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830636 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830669 4984 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830685 4984 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830702 4984 reconciler_common.go:293] "Volume detached 
for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830721 4984 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.932085 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: W0130 10:22:50.961381 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod358ad7a5_08e4_49b4_94c6_e2cdaa29d78b.slice/crio-959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038 WatchSource:0}: Error finding container 959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038: Status 404 returned error can't find the container with id 959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.063326 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064228 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064326 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac" exitCode=2 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064437 4984 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064498 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.067350 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.069481 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.070886 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.074734 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-acl-logging/0.log" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.075470 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-controller/0.log" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076389 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076432 4984 generic.go:334] "Generic 
(PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076453 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076473 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076496 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076515 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" exitCode=0 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076532 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" exitCode=143 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076550 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" exitCode=143 Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" 
event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076725 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076779 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076804 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076831 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:22:51 
crc kubenswrapper[4984]: I0130 10:22:51.076854 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076870 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076887 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076903 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076917 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076932 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076945 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076959 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:22:51 crc 
kubenswrapper[4984]: I0130 10:22:51.076972 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076993 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077016 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077032 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077046 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077061 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077075 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077088 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077103 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077117 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077130 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077145 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077165 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077188 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077205 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077222 4984 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077236 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077290 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077309 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077323 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077338 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077353 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077369 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077390 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"9a5c5f0c87eb230fd06c2a946e269e2d2a3860384327e26e9cd419f72e754050"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077414 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077431 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077447 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077462 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077476 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077491 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077505 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 
10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077518 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077531 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077544 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077796 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.080193 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038"} Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.103488 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.116222 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.158752 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.174924 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"] Jan 30 10:22:51 crc 
kubenswrapper[4984]: I0130 10:22:51.181148 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"] Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.220643 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.240061 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.258467 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.271190 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.289301 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.303587 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.319586 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.337594 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.374923 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.375441 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375469 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375489 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.375893 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375919 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID 
starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375941 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.376281 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376312 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376326 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.376615 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 
10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376634 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376649 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.377027 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377081 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377114 4984 scope.go:117] "RemoveContainer" 
containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.377536 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377567 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377585 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378083 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378119 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378140 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378517 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378550 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378571 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378967 4984 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379010 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379038 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.379389 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379414 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container 
\"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379436 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379874 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380193 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380218 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380553 4984 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380572 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380846 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380883 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382021 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 
6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382063 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382430 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382455 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382781 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382801 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383090 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383107 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383531 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383550 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383839 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not 
exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383868 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.384817 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.384839 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385118 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385137 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385428 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status 
\"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385443 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385693 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385709 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385998 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386013 4984 scope.go:117] "RemoveContainer" 
containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386295 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386309 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386559 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386588 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386882 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could 
not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386898 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387208 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387221 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387569 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387590 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 
10:22:51.387971 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388012 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388386 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388403 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388693 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 
02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388713 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389329 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389487 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389871 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390172 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390189 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390502 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390527 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390823 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not 
exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390839 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391162 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391199 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391645 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.090210 4984 generic.go:334] "Generic (PLEG): container finished" podID="358ad7a5-08e4-49b4-94c6-e2cdaa29d78b" containerID="1c4864b5740296b99fe6fc3d714405a14c1db55be936eaf266be69544a651ab8" exitCode=0 Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.093204 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log"
Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.100171 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" path="/var/lib/kubelet/pods/000a8c9a-5211-4997-8b97-d37e227c899a/volumes"
Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.103170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerDied","Data":"1c4864b5740296b99fe6fc3d714405a14c1db55be936eaf266be69544a651ab8"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103413 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"54234ab55a96b7dfeee1dd713fedd3fc5afd2729b3deb0c7362ca6e1cc006ab0"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103673 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"96f74a19cc189667951bdf060cacadfd6379a5b239ca05987aa7c327d1efb258"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103685 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"4d32fcfe0176e4207fcccc05d2ccf8d2003ea50159f349835b538052ce16abf5"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"b3bd7ca4c574acd0c0d43f4c52c18304d702f9ddb2eec342f31a30464d04adc6"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103701 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"eb676a53e964cf459a78e7ab084c5af3ccc07714c10fb98c969c9a0a17325c1c"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103708 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"311129d4c9e851a5450202ed349d55de22f01190a81e0f40738ea385012b365d"}
Jan 30 10:22:56 crc kubenswrapper[4984]: I0130 10:22:56.124554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"0b6d4174a61d6b37eaeea67058dc60d45d9eb0d68a721c4c3b49d231bc7a8ddf"}
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143137 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"8335331d4a0e32c46c674b569171a026f63af36bd96854156ed59179619b0361"}
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143940 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143959 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143973 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.178519 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" podStartSLOduration=8.17850013 podStartE2EDuration="8.17850013s" podCreationTimestamp="2026-01-30 10:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:22:58.175335686 +0000 UTC m=+682.741639530" watchObservedRunningTime="2026-01-30 10:22:58.17850013 +0000 UTC m=+682.744803944"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.185430 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.189494 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:23:02 crc kubenswrapper[4984]: I0130 10:23:02.090428 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"
Jan 30 10:23:02 crc kubenswrapper[4984]: E0130 10:23:02.091391 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130"
Jan 30 10:23:14 crc kubenswrapper[4984]: I0130 10:23:14.090675 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"
Jan 30 10:23:15 crc kubenswrapper[4984]: I0130 10:23:15.258883 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log"
Jan 30 10:23:15 crc kubenswrapper[4984]: I0130 10:23:15.259625 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"cd92d66ad8d62c2e690c12a016dd84062559fd8d20c072207b7036f21cc178f8"}
Jan 30 10:23:20 crc kubenswrapper[4984]: I0130 10:23:20.970287 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.801444 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.803096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.805721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.815389 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985204 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.086904 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087567 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087826 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.128847 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.424803 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.681519 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:32 crc kubenswrapper[4984]: I0130 10:23:32.525018 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerStarted","Data":"7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537"}
Jan 30 10:23:32 crc kubenswrapper[4984]: I0130 10:23:32.525430 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerStarted","Data":"31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"}
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.000916 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.001052 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.531653 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537" exitCode=0
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.531751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537"}
Jan 30 10:23:35 crc kubenswrapper[4984]: I0130 10:23:35.546994 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="8c00d1318781d55105db6d5905b0f11ee20e9abf1219646311720e3f885082b9" exitCode=0
Jan 30 10:23:35 crc kubenswrapper[4984]: I0130 10:23:35.547345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"8c00d1318781d55105db6d5905b0f11ee20e9abf1219646311720e3f885082b9"}
Jan 30 10:23:36 crc kubenswrapper[4984]: I0130 10:23:36.557036 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="5c0faac60da3ad3a6e14e54f13902a360eb743a81ead8696ea5dd06806a7932a" exitCode=0
Jan 30 10:23:36 crc kubenswrapper[4984]: I0130 10:23:36.557127 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"5c0faac60da3ad3a6e14e54f13902a360eb743a81ead8696ea5dd06806a7932a"}
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.882929 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981212 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981390 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.982526 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle" (OuterVolumeSpecName: "bundle") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.987317 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k" (OuterVolumeSpecName: "kube-api-access-sh57k") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "kube-api-access-sh57k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.003425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util" (OuterVolumeSpecName: "util") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.082931 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.082997 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.083025 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"}
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573557 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573593 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.371475 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372273 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372287 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372302 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="util"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372312 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="util"
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372334 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="pull"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372341 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="pull"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372453 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372875 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.375032 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7bhdk"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.376080 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.376241 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.414019 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.540790 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.642339 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.675394 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.697423 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:43 crc kubenswrapper[4984]: I0130 10:23:43.010336 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:43 crc kubenswrapper[4984]: W0130 10:23:43.034957 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2396e1_20f3_4b5a_b3ab_4e8496d6c58b.slice/crio-8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505 WatchSource:0}: Error finding container 8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505: Status 404 returned error can't find the container with id 8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505
Jan 30 10:23:43 crc kubenswrapper[4984]: I0130 10:23:43.604592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" event={"ID":"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b","Type":"ContainerStarted","Data":"8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505"}
Jan 30 10:23:46 crc kubenswrapper[4984]: I0130 10:23:46.625483 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" event={"ID":"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b","Type":"ContainerStarted","Data":"55d130a62cc92965f51d590619badaa1790cef40a4249b321a7a8b897b8de3b7"}
Jan 30 10:23:46 crc kubenswrapper[4984]: I0130 10:23:46.647163 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" podStartSLOduration=2.250248619 podStartE2EDuration="4.647139515s" podCreationTimestamp="2026-01-30 10:23:42 +0000 UTC" firstStartedPulling="2026-01-30 10:23:43.040850244 +0000 UTC m=+727.607154098" lastFinishedPulling="2026-01-30 10:23:45.43774117 +0000 UTC m=+730.004044994" observedRunningTime="2026-01-30 10:23:46.642856895 +0000 UTC m=+731.209160739" watchObservedRunningTime="2026-01-30 10:23:46.647139515 +0000 UTC m=+731.213443349"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.301988 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.304379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.308771 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-58wdc"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.330200 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.331795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.338918 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.354632 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vh6vz"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.358461 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.389314 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.393846 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.467983 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.468948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.476600 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.476941 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vgmj8"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.477003 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487340 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487405 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487443 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487482 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487512 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487539 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487593 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.492575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588597 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588633 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588681 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588738 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588781 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588811 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588835 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.589295 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.589334 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: E0130 10:23:51.589681 4984 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 30 10:23:51 crc kubenswrapper[4984]: E0130 10:23:51.589770 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair podName:739c7b03-ba6e-48de-a07b-6bd4206c206f nodeName:}" failed. No retries permitted until 2026-01-30 10:23:52.08974378 +0000 UTC m=+736.656047594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-gnkrh" (UID: "739c7b03-ba6e-48de-a07b-6bd4206c206f") : secret "openshift-nmstate-webhook" not found
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.590142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.616126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.620489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.624346 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.677785 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.690754 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.691119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.691293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.692551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.701993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.710553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.711111 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.711876 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.733059 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.738242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792801 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792824 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792844 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792933 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.798444 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.893944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.894377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.894398 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895458 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: 
\"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895533 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895542 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895573 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895697 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: 
\"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.896146 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.897162 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.901455 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.903324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.912168 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " 
pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.925515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"] Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.019893 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"] Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.087221 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.099097 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.101865 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.302611 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.487047 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:52 crc kubenswrapper[4984]: W0130 10:23:52.495421 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c16c4ad_ebc2_421d_8f7c_75beca032e68.slice/crio-6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12 WatchSource:0}: Error finding container 6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12: Status 404 returned error can't find the container with id 6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12 Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.500652 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"] Jan 30 10:23:52 crc kubenswrapper[4984]: W0130 10:23:52.512422 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739c7b03_ba6e_48de_a07b_6bd4206c206f.slice/crio-b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826 WatchSource:0}: Error finding container b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826: Status 404 returned error can't find the container with id b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826 Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.663125 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"fbd87b4de766536ef0d7fdd0bbe39c8bc46dd273e2a7d40dbf78dd2152e9e965"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.664151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" event={"ID":"739c7b03-ba6e-48de-a07b-6bd4206c206f","Type":"ContainerStarted","Data":"b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.665216 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" event={"ID":"471cb540-b50e-4adb-8984-65c46a7f9714","Type":"ContainerStarted","Data":"2a5afaa45e1f227dd21154bd2a58becdb31a3db7f13ee924c15ad570f252307c"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.666534 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7556f7f-xktgt" event={"ID":"6c16c4ad-ebc2-421d-8f7c-75beca032e68","Type":"ContainerStarted","Data":"323c227e1096b3aad356fb2f689f550794bba69c2c568323d06590d8c77e4732"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.666559 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7556f7f-xktgt" event={"ID":"6c16c4ad-ebc2-421d-8f7c-75beca032e68","Type":"ContainerStarted","Data":"6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.667791 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vh6vz" event={"ID":"88dac402-7307-465d-b5a0-61762ee570c6","Type":"ContainerStarted","Data":"76d5fecaf3d830e7dede162ae7b0528c97fa5f1f1f84d78ac7a81d396411fe8c"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.689223 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66c7556f7f-xktgt" podStartSLOduration=1.6892003249999998 podStartE2EDuration="1.689200325s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:23:52.685322764 +0000 UTC m=+737.251626588" 
watchObservedRunningTime="2026-01-30 10:23:52.689200325 +0000 UTC m=+737.255504149" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.687008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"962aac82f3db923fefbe8f26abd28120cf7e49920eefcb668a7206070ae43125"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.690406 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" event={"ID":"739c7b03-ba6e-48de-a07b-6bd4206c206f","Type":"ContainerStarted","Data":"4fa9303285a3500acaf91a90eb757f24ec8c6ac837c63936ff10fb26403280c6"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.692642 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" event={"ID":"471cb540-b50e-4adb-8984-65c46a7f9714","Type":"ContainerStarted","Data":"45e29847d7467dbd735acaa0da348c30272e2c4c759469ec449a19369040b4fb"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.694388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vh6vz" event={"ID":"88dac402-7307-465d-b5a0-61762ee570c6","Type":"ContainerStarted","Data":"39bf8d3e9c1a44a820271eb7b9e45df0b18994360c5f46ff62a7afc26df2954b"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.694491 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.735498 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" podStartSLOduration=2.496658696 podStartE2EDuration="4.735483362s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:52.514783151 +0000 UTC m=+737.081086975" lastFinishedPulling="2026-01-30 
10:23:54.753607817 +0000 UTC m=+739.319911641" observedRunningTime="2026-01-30 10:23:55.707595357 +0000 UTC m=+740.273899261" watchObservedRunningTime="2026-01-30 10:23:55.735483362 +0000 UTC m=+740.301787176" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.736533 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" podStartSLOduration=2.032040191 podStartE2EDuration="4.736526336s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:52.025587929 +0000 UTC m=+736.591891773" lastFinishedPulling="2026-01-30 10:23:54.730074064 +0000 UTC m=+739.296377918" observedRunningTime="2026-01-30 10:23:55.732358609 +0000 UTC m=+740.298662433" watchObservedRunningTime="2026-01-30 10:23:55.736526336 +0000 UTC m=+740.302830160" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.749820 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vh6vz" podStartSLOduration=1.796163635 podStartE2EDuration="4.749803808s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:51.776223586 +0000 UTC m=+736.342527410" lastFinishedPulling="2026-01-30 10:23:54.729863729 +0000 UTC m=+739.296167583" observedRunningTime="2026-01-30 10:23:55.746015169 +0000 UTC m=+740.312318993" watchObservedRunningTime="2026-01-30 10:23:55.749803808 +0000 UTC m=+740.316107632" Jan 30 10:23:56 crc kubenswrapper[4984]: I0130 10:23:56.717647 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:57 crc kubenswrapper[4984]: I0130 10:23:57.724033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"ba907763d3b7a9ef86acddd82a6c1fb5088f5446dab84517ab6d6acc7a20c28e"} Jan 30 
10:23:57 crc kubenswrapper[4984]: I0130 10:23:57.746305 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" podStartSLOduration=1.717429617 podStartE2EDuration="6.746285677s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:51.93699725 +0000 UTC m=+736.503301074" lastFinishedPulling="2026-01-30 10:23:56.96585331 +0000 UTC m=+741.532157134" observedRunningTime="2026-01-30 10:23:57.740802538 +0000 UTC m=+742.307106392" watchObservedRunningTime="2026-01-30 10:23:57.746285677 +0000 UTC m=+742.312589501" Jan 30 10:24:01 crc kubenswrapper[4984]: I0130 10:24:01.755637 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.088360 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.088855 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.098092 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.779146 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.842809 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:03 crc kubenswrapper[4984]: I0130 10:24:03.001218 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:24:03 crc kubenswrapper[4984]: I0130 10:24:03.001301 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:24:11 crc kubenswrapper[4984]: I0130 10:24:11.944413 4984 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 10:24:12 crc kubenswrapper[4984]: I0130 10:24:12.312135 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.804539 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.806625 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.809128 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.817407 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.902189 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-v2prt" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" containerID="cri-o://88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" gracePeriod=15 Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909677 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn94j\" 
(UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011160 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011281 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011342 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.012142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.012620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.040553 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.161311 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.275894 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v2prt_6ca41dbd-8af6-43ac-af3d-b0cc6222264b/console/0.log" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.276378 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416547 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416749 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416779 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416892 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.417581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418114 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config" (OuterVolumeSpecName: "console-config") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418131 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca" (OuterVolumeSpecName: "service-ca") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418369 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.422909 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.423085 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl" (OuterVolumeSpecName: "kube-api-access-224pl") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "kube-api-access-224pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.423113 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518196 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518271 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518286 4984 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518301 4984 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518312 4984 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518323 4984 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518334 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc 
kubenswrapper[4984]: I0130 10:24:28.563817 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970558 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="318bec374a38e85e9c4f0c68d49ddc8cfb736bc2bb16687bf04f555f23d8ff40" exitCode=0 Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970623 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"318bec374a38e85e9c4f0c68d49ddc8cfb736bc2bb16687bf04f555f23d8ff40"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970646 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerStarted","Data":"5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974233 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v2prt_6ca41dbd-8af6-43ac-af3d-b0cc6222264b/console/0.log" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974294 4984 generic.go:334] "Generic (PLEG): container finished" podID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" exitCode=2 Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerDied","Data":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} Jan 30 10:24:28 crc 
kubenswrapper[4984]: I0130 10:24:28.974351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerDied","Data":"1d107edce64a981b016ac18f64e3952e99a1d1ef26bb18f85c1948ec49ead73c"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974368 4984 scope.go:117] "RemoveContainer" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974963 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.002323 4984 scope.go:117] "RemoveContainer" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:29 crc kubenswrapper[4984]: E0130 10:24:29.003096 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": container with ID starting with 88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd not found: ID does not exist" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.003147 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} err="failed to get container status \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": rpc error: code = NotFound desc = could not find container \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": container with ID starting with 88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd not found: ID does not exist" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.017231 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.021159 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.102451 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" path="/var/lib/kubelet/pods/6ca41dbd-8af6-43ac-af3d-b0cc6222264b/volumes" Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.996812 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="632fe958e127244654dbcbdedf5df666fd52a8a3465d217c273e76221b35a0a3" exitCode=0 Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.996889 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"632fe958e127244654dbcbdedf5df666fd52a8a3465d217c273e76221b35a0a3"} Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149239 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:31 crc kubenswrapper[4984]: E0130 10:24:31.149585 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149600 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149747 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.150971 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.155982 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257353 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257855 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359061 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359120 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359603 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359757 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.379450 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.476393 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.707121 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.003600 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="a63b21f2008c73a509a1ca055dd42b3084bd6370b089ea5cdc11723406872d0a" exitCode=0 Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.003655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"a63b21f2008c73a509a1ca055dd42b3084bd6370b089ea5cdc11723406872d0a"} Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005122 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" exitCode=0 Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef"} Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"a552366bd60a088f98781cc480bddc1054116923b4637e5e3978f79539b7bf40"} Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001091 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001549 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001639 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.002400 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.002497 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" gracePeriod=600 Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.016355 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.455524 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590716 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590757 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.592298 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle" (OuterVolumeSpecName: "bundle") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.599477 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j" (OuterVolumeSpecName: "kube-api-access-vn94j") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "kube-api-access-vn94j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.604940 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util" (OuterVolumeSpecName: "util") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692664 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692705 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692722 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027002 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027521 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027037 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.029901 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" exitCode=0 Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.029979 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034293 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" exitCode=0 Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034381 4984 scope.go:117] "RemoveContainer" containerID="cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" Jan 30 10:24:35 crc kubenswrapper[4984]: I0130 10:24:35.043295 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" 
event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} Jan 30 10:24:35 crc kubenswrapper[4984]: I0130 10:24:35.064648 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkwf8" podStartSLOduration=1.296489039 podStartE2EDuration="4.064635489s" podCreationTimestamp="2026-01-30 10:24:31 +0000 UTC" firstStartedPulling="2026-01-30 10:24:32.006743769 +0000 UTC m=+776.573047583" lastFinishedPulling="2026-01-30 10:24:34.774890209 +0000 UTC m=+779.341194033" observedRunningTime="2026-01-30 10:24:35.062583881 +0000 UTC m=+779.628887725" watchObservedRunningTime="2026-01-30 10:24:35.064635489 +0000 UTC m=+779.630939313" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.476745 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.477288 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.553976 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:42 crc kubenswrapper[4984]: I0130 10:24:42.138736 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:42 crc kubenswrapper[4984]: I0130 10:24:42.739587 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791205 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791662 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="pull" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791674 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="pull" Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791684 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="util" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791689 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="util" Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791696 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791703 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791801 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.792216 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794314 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794345 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794479 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wllvq" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794841 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.797086 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.849950 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.850046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 
10:24:43.850086 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.854544 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951273 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951673 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951823 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.957369 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.957502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.968428 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.101052 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkwf8" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" containerID="cri-o://785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" gracePeriod=2 Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.117455 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.118122 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.118702 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121129 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dr6xh" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121169 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121359 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.151581 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154894 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154979 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255658 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.271264 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod 
\"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.274791 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.282008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.483518 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.541407 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.559731 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561157 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561232 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561915 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities" (OuterVolumeSpecName: "utilities") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.565179 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp" (OuterVolumeSpecName: "kube-api-access-w8znp") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "kube-api-access-w8znp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.571490 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.662881 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.662914 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.671603 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.766679 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.017906 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.114729 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" event={"ID":"fb5cf2c1-4334-4aee-9f94-2f1c2797b484","Type":"ContainerStarted","Data":"78a0264aeda8cbf2c2c638183fa3a8774b559697e94e44af00c7b09fb492be40"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.119445 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" event={"ID":"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05","Type":"ContainerStarted","Data":"7016d895b04fe74310f5b513c915fe3094063d531d69d40350be412797189e23"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.122783 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" exitCode=0 Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.122917 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123036 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123132 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"a552366bd60a088f98781cc480bddc1054116923b4637e5e3978f79539b7bf40"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123185 4984 scope.go:117] "RemoveContainer" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.151910 4984 scope.go:117] "RemoveContainer" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.159882 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.163767 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.175855 4984 scope.go:117] "RemoveContainer" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188346 4984 scope.go:117] "RemoveContainer" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.188804 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": container with ID starting with 785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da not found: ID does not exist" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188843 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} err="failed to get container status \"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": rpc error: code = NotFound desc = could not find container \"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": container with ID starting with 785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da not found: ID does not exist" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188870 4984 scope.go:117] "RemoveContainer" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.189228 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": container with ID starting with 47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1 not found: ID does not exist" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189274 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} err="failed to get container status \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": rpc error: code = NotFound desc = could not find container \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": container with ID 
starting with 47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1 not found: ID does not exist" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189292 4984 scope.go:117] "RemoveContainer" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.189646 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": container with ID starting with 0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef not found: ID does not exist" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189669 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef"} err="failed to get container status \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": rpc error: code = NotFound desc = could not find container \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": container with ID starting with 0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef not found: ID does not exist" Jan 30 10:24:46 crc kubenswrapper[4984]: I0130 10:24:46.111114 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" path="/var/lib/kubelet/pods/a0edede8-30cb-4add-9a06-830084c7c57b/volumes" Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 10:24:48.157367 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" event={"ID":"fb5cf2c1-4334-4aee-9f94-2f1c2797b484","Type":"ContainerStarted","Data":"c9d7cb153da5f792a55521dbeb275f836c6114611a4cf84e1eefcd63a9391730"} Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 
10:24:48.158582 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 10:24:48.185156 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" podStartSLOduration=2.312029655 podStartE2EDuration="5.18514091s" podCreationTimestamp="2026-01-30 10:24:43 +0000 UTC" firstStartedPulling="2026-01-30 10:24:44.54618209 +0000 UTC m=+789.112485914" lastFinishedPulling="2026-01-30 10:24:47.419293345 +0000 UTC m=+791.985597169" observedRunningTime="2026-01-30 10:24:48.180863244 +0000 UTC m=+792.747167068" watchObservedRunningTime="2026-01-30 10:24:48.18514091 +0000 UTC m=+792.751444724" Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.180885 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" event={"ID":"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05","Type":"ContainerStarted","Data":"b1c9e3ad29f0878d2568c3c76b1ee26e4c59f762a52161e077b3e9a4edc47c8b"} Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.182608 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.212468 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" podStartSLOduration=2.137636963 podStartE2EDuration="7.212445086s" podCreationTimestamp="2026-01-30 10:24:44 +0000 UTC" firstStartedPulling="2026-01-30 10:24:45.048725987 +0000 UTC m=+789.615029811" lastFinishedPulling="2026-01-30 10:24:50.1235341 +0000 UTC m=+794.689837934" observedRunningTime="2026-01-30 10:24:51.207521643 +0000 UTC m=+795.773825487" watchObservedRunningTime="2026-01-30 10:24:51.212445086 +0000 UTC m=+795.778748930" Jan 30 
10:25:04 crc kubenswrapper[4984]: I0130 10:25:04.580834 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.122881 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948469 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z7vlt"] Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948705 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-content" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948720 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-content" Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948734 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948741 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948762 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-utilities" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948771 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-utilities" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948884 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: 
I0130 10:25:24.951143 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953025 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953544 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gkwrr" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.973833 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.974632 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.976575 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.996981 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gbr\" (UniqueName: \"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037477 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037541 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037603 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod 
\"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037704 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.073890 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wc8c7"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.074795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.077412 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.078311 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079325 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079352 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079396 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079352 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.081060 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-czq8s" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.084301 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.141956 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gbr\" (UniqueName: \"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142358 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142403 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142531 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142885 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " 
pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142912 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142963 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143078 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143105 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.143197 4984 secret.go:188] Couldn't 
get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.143261 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs podName:997946ae-eb76-422f-9954-d9dae3ca8184 nodeName:}" failed. No retries permitted until 2026-01-30 10:25:25.643226747 +0000 UTC m=+830.209530571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs") pod "frr-k8s-z7vlt" (UID: "997946ae-eb76-422f-9954-d9dae3ca8184") : secret "frr-k8s-certs-secret" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143297 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143324 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143347 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143612 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143641 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.144340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.163068 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6gbr\" (UniqueName: 
\"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.163720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.165236 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244129 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244152 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod 
\"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244184 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244280 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244295 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.244700 4984 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.244793 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist 
podName:07684256-0759-426a-9ba0-40514aa3e7ac nodeName:}" failed. No retries permitted until 2026-01-30 10:25:25.744768821 +0000 UTC m=+830.311072705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist") pod "speaker-wc8c7" (UID: "07684256-0759-426a-9ba0-40514aa3e7ac") : secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.245316 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.248386 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.255749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.256747 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.259434 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.265962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.288798 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.486964 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.491388 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.648618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.653548 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.672377 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: W0130 10:25:25.680864 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae05bf6_d99c_4fb1_9780_20249ec78e1e.slice/crio-e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554 WatchSource:0}: Error finding container e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554: Status 404 returned error can't find the container with id e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554 Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.749377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.749613 4984 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.749673 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist podName:07684256-0759-426a-9ba0-40514aa3e7ac nodeName:}" failed. No retries permitted until 2026-01-30 10:25:26.749657251 +0000 UTC m=+831.315961085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist") pod "speaker-wc8c7" (UID: "07684256-0759-426a-9ba0-40514aa3e7ac") : secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.868331 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.412944 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7a089b9f7e74c897b0a73d5388e438e3794a31cab9612c5cf8edf519b984546c"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.414967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" event={"ID":"7e54bb11-7cfb-4840-b861-bd6d184c36f4","Type":"ContainerStarted","Data":"58e004b6ed388421432c0776f6f27705d25295c834b84e458409dc8ca7bcd3ff"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416602 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"7f6fe00cbaf737c0f731065c567bbe878a8bf4d5bdef0a143440acd1fe1daf23"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" 
event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"7610d1a071e562b3a42e48c46355e5d7df1566783312bfa1e11e3d093e53e2d4"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416808 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.448697 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-4tngn" podStartSLOduration=1.448669375 podStartE2EDuration="1.448669375s" podCreationTimestamp="2026-01-30 10:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:25:26.440219696 +0000 UTC m=+831.006523530" watchObservedRunningTime="2026-01-30 10:25:26.448669375 +0000 UTC m=+831.014973239" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.763268 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.770005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.948877 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:27 crc kubenswrapper[4984]: I0130 10:25:27.434375 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"9f977a546441f73297e073da518ebb9c4bf6a4f48b630b6b222f2b2c46a10047"} Jan 30 10:25:27 crc kubenswrapper[4984]: I0130 10:25:27.434410 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"de15cff346b174cacc26d1ce8dfcbb6ab999dc4665410ef5c5dc4df94dbcbb8e"} Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.449397 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"96fe113673ee6948d85f2d9469976880f5f0a6c26078f90e457ae0e095f6dc53"} Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.449871 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.471071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wc8c7" podStartSLOduration=3.471047651 podStartE2EDuration="3.471047651s" podCreationTimestamp="2026-01-30 10:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:25:28.469604702 +0000 UTC m=+833.035908536" watchObservedRunningTime="2026-01-30 10:25:28.471047651 +0000 UTC m=+833.037351475" Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.481560 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" 
event={"ID":"7e54bb11-7cfb-4840-b861-bd6d184c36f4","Type":"ContainerStarted","Data":"060f3efffab5f8e3e74e6284f9bfc38965e01dbd990aa488e26f3bde4989e5fd"} Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482118 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482780 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="46e8f02958a2a8983a48d8c28e1e83c1cc2948423bbc66e4e6d490013fd69616" exitCode=0 Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482803 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"46e8f02958a2a8983a48d8c28e1e83c1cc2948423bbc66e4e6d490013fd69616"} Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.498218 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" podStartSLOduration=1.8635116489999999 podStartE2EDuration="9.498202873s" podCreationTimestamp="2026-01-30 10:25:24 +0000 UTC" firstStartedPulling="2026-01-30 10:25:25.501085131 +0000 UTC m=+830.067388955" lastFinishedPulling="2026-01-30 10:25:33.135776365 +0000 UTC m=+837.702080179" observedRunningTime="2026-01-30 10:25:33.496481326 +0000 UTC m=+838.062785150" watchObservedRunningTime="2026-01-30 10:25:33.498202873 +0000 UTC m=+838.064506697" Jan 30 10:25:34 crc kubenswrapper[4984]: I0130 10:25:34.490085 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="0a6702565104b0cce46eedcc9d14ad5cc67bdf5bf4b717e8689bbff7af838598" exitCode=0 Jan 30 10:25:34 crc kubenswrapper[4984]: I0130 10:25:34.490171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" 
event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"0a6702565104b0cce46eedcc9d14ad5cc67bdf5bf4b717e8689bbff7af838598"} Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.496057 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.501749 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="ffe6d4d10770c846f664d56d694daf14cf564872514e6f4d91147c082b73bb71" exitCode=0 Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.501800 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"ffe6d4d10770c846f664d56d694daf14cf564872514e6f4d91147c082b73bb71"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511459 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7fe4222c13517cb85341dcab25b4281c9ea010c138f0355bb412237459bc5ccd"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"9795cf593ee73066007d0e4bb5869bd88b22d5d2c6081171c3b77f4fb3e815c4"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7f89ec3198fc7bddcb2c991fdd70deeb6373ebec08b366cde22fb079f6a9275b"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511847 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" 
event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"f914d87a23774e7d017510e56064082fbf680e91f94aac36037096ed2b2bc88a"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.535525 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"2c4068053574c030f20690a866a7a73aabf0715254899897c3146aab79b13864"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.535960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"caf74a790daf2528309bd05f398d4061b6b673097f654ca1e2625f92790bfefc"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.536473 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.577562 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z7vlt" podStartSLOduration=6.41679198 podStartE2EDuration="13.577545804s" podCreationTimestamp="2026-01-30 10:25:24 +0000 UTC" firstStartedPulling="2026-01-30 10:25:25.969679527 +0000 UTC m=+830.535983351" lastFinishedPulling="2026-01-30 10:25:33.130433331 +0000 UTC m=+837.696737175" observedRunningTime="2026-01-30 10:25:37.573034282 +0000 UTC m=+842.139338186" watchObservedRunningTime="2026-01-30 10:25:37.577545804 +0000 UTC m=+842.143849638" Jan 30 10:25:40 crc kubenswrapper[4984]: I0130 10:25:40.869579 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:40 crc kubenswrapper[4984]: I0130 10:25:40.940105 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:45 crc kubenswrapper[4984]: I0130 10:25:45.295101 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:45 crc kubenswrapper[4984]: I0130 10:25:45.875761 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:46 crc kubenswrapper[4984]: I0130 10:25:46.953101 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.030323 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.031748 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.036355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.036496 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7274q" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.041346 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.047612 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.213548 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc 
kubenswrapper[4984]: I0130 10:25:50.315112 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.337013 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.356697 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.589240 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: W0130 10:25:50.602077 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215dcee8_cadb_424f_98c5_ee7ebcf45d3a.slice/crio-c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f WatchSource:0}: Error finding container c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f: Status 404 returned error can't find the container with id c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.630138 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" 
event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerStarted","Data":"c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f"} Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.410438 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.656608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerStarted","Data":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.677496 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-npz6v" podStartSLOduration=1.433583782 podStartE2EDuration="3.67743793s" podCreationTimestamp="2026-01-30 10:25:50 +0000 UTC" firstStartedPulling="2026-01-30 10:25:50.607411754 +0000 UTC m=+855.173715578" lastFinishedPulling="2026-01-30 10:25:52.851265892 +0000 UTC m=+857.417569726" observedRunningTime="2026-01-30 10:25:53.673698052 +0000 UTC m=+858.240001876" watchObservedRunningTime="2026-01-30 10:25:53.67743793 +0000 UTC m=+858.243741764" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.024846 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.026991 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.037003 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.083045 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.185591 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.211053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.364056 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.663507 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-npz6v" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" containerID="cri-o://e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" gracePeriod=2 Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.838936 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: W0130 10:25:54.840179 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe54871d_c3f5_40bc_b6cd_63602755ca51.slice/crio-1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d WatchSource:0}: Error finding container 1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d: Status 404 returned error can't find the container with id 1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.026822 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.098420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.106184 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc" (OuterVolumeSpecName: "kube-api-access-2wgjc") pod "215dcee8-cadb-424f-98c5-ee7ebcf45d3a" (UID: "215dcee8-cadb-424f-98c5-ee7ebcf45d3a"). InnerVolumeSpecName "kube-api-access-2wgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.200921 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") on node \"crc\" DevicePath \"\"" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672875 4984 generic.go:334] "Generic (PLEG): container finished" podID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" exitCode=0 Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672988 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerDied","Data":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.673186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerDied","Data":"c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.673229 4984 scope.go:117] "RemoveContainer" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.675033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqgjv" event={"ID":"be54871d-c3f5-40bc-b6cd-63602755ca51","Type":"ContainerStarted","Data":"77909d946bdbcbc1723a2e7efd64b09a8447e749ff9b1ccedd9874d920cf3f82"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.675099 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqgjv" event={"ID":"be54871d-c3f5-40bc-b6cd-63602755ca51","Type":"ContainerStarted","Data":"1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.703286 4984 scope.go:117] "RemoveContainer" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: E0130 10:25:55.705239 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": container with ID starting with e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041 not found: ID does not exist" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.705477 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} err="failed to get container status \"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": rpc error: code = NotFound desc = could not find container \"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": container with ID starting with e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041 not found: ID does not exist" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.719235 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nqgjv" podStartSLOduration=1.652136274 podStartE2EDuration="1.71921515s" podCreationTimestamp="2026-01-30 10:25:54 +0000 UTC" firstStartedPulling="2026-01-30 10:25:54.847234465 +0000 UTC m=+859.413538289" lastFinishedPulling="2026-01-30 10:25:54.914313331 +0000 UTC m=+859.480617165" observedRunningTime="2026-01-30 10:25:55.704709068 +0000 UTC m=+860.271012922" watchObservedRunningTime="2026-01-30 10:25:55.71921515 +0000 UTC m=+860.285518974" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.721722 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.726769 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:56 crc kubenswrapper[4984]: I0130 10:25:56.104443 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" path="/var/lib/kubelet/pods/215dcee8-cadb-424f-98c5-ee7ebcf45d3a/volumes" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.364513 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.365015 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.401540 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.788930 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.377915 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:11 crc kubenswrapper[4984]: E0130 10:26:11.378870 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.378888 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.379053 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.380467 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.424619 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.424895 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jq7m4" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439444 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 
10:26:11.540916 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541047 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541807 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541858 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.565558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.736474 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.025668 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842488 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="ba83d11e0beab8264fb47a35d00a82c25ea68f771b87cc983330039260a2defc" exitCode=0 Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842548 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"ba83d11e0beab8264fb47a35d00a82c25ea68f771b87cc983330039260a2defc"} Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842946 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerStarted","Data":"8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f"} Jan 30 10:26:13 crc kubenswrapper[4984]: I0130 10:26:13.853848 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="74f318d1b9c26527168de678b10eef5ee23b24fcedab32a1f391fc17580575fd" exitCode=0 Jan 30 10:26:13 crc kubenswrapper[4984]: I0130 10:26:13.853920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"74f318d1b9c26527168de678b10eef5ee23b24fcedab32a1f391fc17580575fd"} Jan 30 10:26:14 crc kubenswrapper[4984]: I0130 10:26:14.864944 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="9fbbaf825a8cfb2501a30b220334bf1a4eb09724fcc91ea6eecc0dab8e85af46" exitCode=0 Jan 30 10:26:14 crc kubenswrapper[4984]: I0130 10:26:14.864996 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"9fbbaf825a8cfb2501a30b220334bf1a4eb09724fcc91ea6eecc0dab8e85af46"} Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.213315 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.412039 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.412641 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.413043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.413660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle" (OuterVolumeSpecName: "bundle") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.414033 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.425596 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd" (OuterVolumeSpecName: "kube-api-access-2qpbd") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "kube-api-access-2qpbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.425910 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util" (OuterVolumeSpecName: "util") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.515356 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.515402 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890279 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f"} Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890351 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890369 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.050977 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053001 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="util" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053104 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="util" Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053187 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053284 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053398 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="pull" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053510 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="pull" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053744 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.054383 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.056879 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-x8fxc" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.144778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.163473 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.247483 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.264759 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.370689 4984 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.797928 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.949497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" event={"ID":"f4b80c7c-3e81-48d4-862c-684369655891","Type":"ContainerStarted","Data":"88eaf4544290c8cbfff907a2f75cf49f33c40a7f119651cacf639a099c1510df"} Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.056877 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.058317 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.074501 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.074564 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.075176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.079323 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xpf"]
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176050 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176150 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176666 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.198470 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.378064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.741486 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xpf"]
Jan 30 10:26:28 crc kubenswrapper[4984]: W0130 10:26:28.742751 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9468373_fc0f_4d5e_85c0_4d09686d9b9a.slice/crio-40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64 WatchSource:0}: Error finding container 40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64: Status 404 returned error can't find the container with id 40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.986167 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" event={"ID":"f4b80c7c-3e81-48d4-862c-684369655891","Type":"ContainerStarted","Data":"4b0159011825f1cd752eacb10e8b694aabb94c7c7ed94ff52e434800845798ae"}
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.986311 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988471 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f" exitCode=0
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f"}
Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988536 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64"}
Jan 30 10:26:29 crc kubenswrapper[4984]: I0130 10:26:29.043732 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" podStartSLOduration=1.197445472 podStartE2EDuration="5.043710312s" podCreationTimestamp="2026-01-30 10:26:24 +0000 UTC" firstStartedPulling="2026-01-30 10:26:24.808506863 +0000 UTC m=+889.374810697" lastFinishedPulling="2026-01-30 10:26:28.654771713 +0000 UTC m=+893.221075537" observedRunningTime="2026-01-30 10:26:29.033790691 +0000 UTC m=+893.600094515" watchObservedRunningTime="2026-01-30 10:26:29.043710312 +0000 UTC m=+893.610014136"
Jan 30 10:26:30 crc kubenswrapper[4984]: I0130 10:26:29.999698 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f"}
Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.008636 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f" exitCode=0
Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.008692 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f"}
Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.009032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c"}
Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.035040 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94xpf" podStartSLOduration=3.613010908 podStartE2EDuration="5.035014012s" podCreationTimestamp="2026-01-30 10:26:26 +0000 UTC" firstStartedPulling="2026-01-30 10:26:28.990499571 +0000 UTC m=+893.556803395" lastFinishedPulling="2026-01-30 10:26:30.412502645 +0000 UTC m=+894.978806499" observedRunningTime="2026-01-30 10:26:31.031082689 +0000 UTC m=+895.597386513" watchObservedRunningTime="2026-01-30 10:26:31.035014012 +0000 UTC m=+895.601317836"
Jan 30 10:26:33 crc kubenswrapper[4984]: I0130 10:26:33.001150 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:26:33 crc kubenswrapper[4984]: I0130 10:26:33.001218 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:26:34 crc kubenswrapper[4984]: I0130 10:26:34.375061 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.065007 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.066600 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.128332 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.223074 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.223805 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.224017 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.324922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325047 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325081 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325427 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.346674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.379232 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.379294 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.400546 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.426154 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.610584 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051288 4984 generic.go:334] "Generic (PLEG): container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184" exitCode=0
Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"}
Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051422 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerStarted","Data":"a1e87647fa48ca5a25ffd87302b4d82f9faa355a900d689f8b5ed3691791996c"}
Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.102481 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:38 crc kubenswrapper[4984]: I0130 10:26:38.060161 4984 generic.go:334] "Generic (PLEG): container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51" exitCode=0
Jan 30 10:26:38 crc kubenswrapper[4984]: I0130 10:26:38.060222 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"}
Jan 30 10:26:39 crc kubenswrapper[4984]: I0130 10:26:39.070294 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerStarted","Data":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"}
Jan 30 10:26:39 crc kubenswrapper[4984]: I0130 10:26:39.098397 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5j2l" podStartSLOduration=1.471152838 podStartE2EDuration="3.098376134s" podCreationTimestamp="2026-01-30 10:26:36 +0000 UTC" firstStartedPulling="2026-01-30 10:26:37.05373124 +0000 UTC m=+901.620035064" lastFinishedPulling="2026-01-30 10:26:38.680954496 +0000 UTC m=+903.247258360" observedRunningTime="2026-01-30 10:26:39.097727057 +0000 UTC m=+903.664030881" watchObservedRunningTime="2026-01-30 10:26:39.098376134 +0000 UTC m=+903.664679978"
Jan 30 10:26:40 crc kubenswrapper[4984]: I0130 10:26:40.050380 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xpf"]
Jan 30 10:26:40 crc kubenswrapper[4984]: I0130 10:26:40.050646 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94xpf" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server" containerID="cri-o://88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c" gracePeriod=2
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.092027 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c" exitCode=0
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.092916 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c"}
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.785486 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.920971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") "
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.921037 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") "
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.921061 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") "
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.923196 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities" (OuterVolumeSpecName: "utilities") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.930657 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27" (OuterVolumeSpecName: "kube-api-access-mtc27") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "kube-api-access-mtc27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.972883 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024724 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024793 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024820 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.106481 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64"}
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.108874 4984 scope.go:117] "RemoveContainer" containerID="88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c"
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.106540 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf"
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.146376 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xpf"]
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.152493 4984 scope.go:117] "RemoveContainer" containerID="f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f"
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.160200 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94xpf"]
Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.176834 4984 scope.go:117] "RemoveContainer" containerID="b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f"
Jan 30 10:26:44 crc kubenswrapper[4984]: I0130 10:26:44.104444 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" path="/var/lib/kubelet/pods/e9468373-fc0f-4d5e-85c0-4d09686d9b9a/volumes"
Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.401145 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.401645 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.463859 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:47 crc kubenswrapper[4984]: I0130 10:26:47.216094 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:48 crc kubenswrapper[4984]: I0130 10:26:48.254221 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.160462 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5j2l" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server" containerID="cri-o://8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" gracePeriod=2
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.553863 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740654 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") "
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") "
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740845 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") "
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.742679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities" (OuterVolumeSpecName: "utilities") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.750611 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd" (OuterVolumeSpecName: "kube-api-access-vzfnd") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "kube-api-access-vzfnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.764584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842766 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842797 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173887 4984 generic.go:334] "Generic (PLEG): container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" exitCode=0
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173971 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"}
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173983 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.174005 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"a1e87647fa48ca5a25ffd87302b4d82f9faa355a900d689f8b5ed3691791996c"}
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.174055 4984 scope.go:117] "RemoveContainer" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.204573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.211104 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"]
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.222042 4984 scope.go:117] "RemoveContainer" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.244531 4984 scope.go:117] "RemoveContainer" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.270856 4984 scope.go:117] "RemoveContainer" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"
Jan 30 10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.272109 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": container with ID starting with 8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965 not found: ID does not exist" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.272157 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"} err="failed to get container status \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": rpc error: code = NotFound desc = could not find container \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": container with ID starting with 8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965 not found: ID does not exist"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.272184 4984 scope.go:117] "RemoveContainer" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"
Jan 30 10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.273078 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": container with ID starting with d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51 not found: ID does not exist" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273127 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"} err="failed to get container status \"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": rpc error: code = NotFound desc = could not find container \"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": container with ID starting with d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51 not found: ID does not exist"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273158 4984 scope.go:117] "RemoveContainer" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"
Jan 30 10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.273708 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": container with ID starting with e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184 not found: ID does not exist" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"
Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273839 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"} err="failed to get container status \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": rpc error: code = NotFound desc = could not find container \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": container with ID starting with e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184 not found: ID does not exist"
Jan 30 10:26:52 crc kubenswrapper[4984]: I0130 10:26:52.096946 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" path="/var/lib/kubelet/pods/7be10507-f755-4ddf-8a9c-4699573ac179/volumes"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.366373 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367111 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367127 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367142 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367150 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367168 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367177 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367188 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367195 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367205 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367213 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367227 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367235 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367371 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367389 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.375746 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.383798 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498135 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498931 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498979 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600595 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.601039 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.601135 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.621692 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.695368 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:59 crc kubenswrapper[4984]: I0130 10:26:59.161515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:26:59 crc kubenswrapper[4984]: I0130 10:26:59.240782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"563a7e524d30c4803c51ba214f5d762c3f2083539dc1784f4a771db1f2668c0c"}
Jan 30 10:27:00 crc kubenswrapper[4984]: I0130 10:27:00.249593 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d" exitCode=0
Jan 30 10:27:00 crc kubenswrapper[4984]: I0130 10:27:00.251128 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz"
event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"} Jan 30 10:27:01 crc kubenswrapper[4984]: I0130 10:27:01.259630 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"} Jan 30 10:27:02 crc kubenswrapper[4984]: I0130 10:27:02.271641 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371" exitCode=0 Jan 30 10:27:02 crc kubenswrapper[4984]: I0130 10:27:02.271723 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"} Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.000475 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.000877 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.280032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" 
event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"} Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.308926 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7g2fz" podStartSLOduration=2.835687994 podStartE2EDuration="5.30891374s" podCreationTimestamp="2026-01-30 10:26:58 +0000 UTC" firstStartedPulling="2026-01-30 10:27:00.253698344 +0000 UTC m=+924.820002168" lastFinishedPulling="2026-01-30 10:27:02.72692409 +0000 UTC m=+927.293227914" observedRunningTime="2026-01-30 10:27:03.307306838 +0000 UTC m=+927.873610662" watchObservedRunningTime="2026-01-30 10:27:03.30891374 +0000 UTC m=+927.875217564" Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.696341 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.696894 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.737503 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:09 crc kubenswrapper[4984]: I0130 10:27:09.366307 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:09 crc kubenswrapper[4984]: I0130 10:27:09.410336 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"] Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.334854 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7g2fz" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server" 
containerID="cri-o://20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" gracePeriod=2 Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.743651 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.796651 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.796971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.797096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.797701 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities" (OuterVolumeSpecName: "utilities") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.805628 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b" (OuterVolumeSpecName: "kube-api-access-szv5b") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "kube-api-access-szv5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.898561 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.898592 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") on node \"crc\" DevicePath \"\"" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.162044 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.201390 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362336 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" exitCode=0 Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362394 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"} Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362434 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"563a7e524d30c4803c51ba214f5d762c3f2083539dc1784f4a771db1f2668c0c"} Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362457 4984 scope.go:117] "RemoveContainer" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362596 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.390161 4984 scope.go:117] "RemoveContainer" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.403321 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.407398 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.411475 4984 scope.go:117] "RemoveContainer" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.424778 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"] Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425042 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-utilities" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425059 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-utilities" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425079 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425098 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-content" Jan 30 10:27:12 
crc kubenswrapper[4984]: I0130 10:27:12.425104 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-content" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425229 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425832 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.429037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k72mx" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.437790 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.442771 4984 scope.go:117] "RemoveContainer" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.444730 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": container with ID starting with 20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596 not found: ID does not exist" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.444803 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"} err="failed to get container status \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": rpc error: 
code = NotFound desc = could not find container \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": container with ID starting with 20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596 not found: ID does not exist" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.444846 4984 scope.go:117] "RemoveContainer" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.445546 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": container with ID starting with dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371 not found: ID does not exist" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.445582 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"} err="failed to get container status \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": rpc error: code = NotFound desc = could not find container \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": container with ID starting with dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371 not found: ID does not exist" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.445604 4984 scope.go:117] "RemoveContainer" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.446150 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": container with ID starting with 
b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d not found: ID does not exist" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.446206 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"} err="failed to get container status \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": rpc error: code = NotFound desc = could not find container \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": container with ID starting with b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d not found: ID does not exist" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.459088 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.462620 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.469307 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nn86j" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.473337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.474956 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.477788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g5sl5" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.480767 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.481584 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.485721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lr58g" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.493770 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.500607 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508102 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pj6f\" (UniqueName: 
\"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508178 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508197 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" (UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.521780 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.527803 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.528766 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.532017 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-77kll" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.547527 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.551522 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.552374 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.561487 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.562210 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.565632 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j2m86" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.565843 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.566475 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-thdrj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.570022 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.571051 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.574048 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.580711 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kjwsf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.584072 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.602535 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617687 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617776 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" (UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617804 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617827 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617852 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617878 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pj6f\" (UniqueName: \"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod 
\"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.664989 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.667146 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.669907 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.676282 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pj6f\" (UniqueName: \"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.678437 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rtfnj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.689660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.711960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" 
(UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.715954 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.716973 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.721044 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mv96k" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725472 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") 
" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725544 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725586 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.726500 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.726559 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:13.226537813 +0000 UTC m=+937.792841637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.732002 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.744772 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.747241 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.748796 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.753489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.762970 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.773768 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.774464 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.774809 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.776231 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-x6pjc" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.777852 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-srv7f" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.784222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.784815 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.813599 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832133 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832906 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: \"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: 
I0130 10:27:12.832997 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.856933 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.858033 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.849945 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.872390 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.888862 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.909520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.919076 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.919562 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.924179 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.932293 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mzqbr" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: \"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934257 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.954719 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.956055 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.957515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.960025 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: 
\"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.965280 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.966856 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.971809 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5b5d7" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.003525 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.016054 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.017807 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.023929 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g5pdg" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.034369 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.034987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.035039 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnglj\" (UniqueName: \"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.035059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.075060 4984 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.076358 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.083758 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.084237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hd2cj" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.084746 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.087742 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.088064 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rq8h4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.089061 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.090310 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.091696 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jrdtd" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.097948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.116699 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.126668 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139429 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod \"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139484 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139677 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnglj\" (UniqueName: 
\"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.144197 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.145851 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.171478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.174866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnglj\" (UniqueName: \"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.180844 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.188880 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.189347 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.204497 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.205631 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.209117 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jnlqs" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.215030 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.218923 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.227209 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.227922 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.242970 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.243040 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246366 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod 
\"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246399 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246462 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.249617 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.250553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.252206 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.252325 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:13.752297714 +0000 UTC m=+938.318601538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.253543 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.253661 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.253724 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.253700311 +0000 UTC m=+938.820004135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.256731 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-j4mt6" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.262466 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rdflf" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.280633 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.291934 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.291968 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.322626 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod \"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.339371 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357356 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc 
kubenswrapper[4984]: I0130 10:27:13.357490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357657 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.390431 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.391123 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.391652 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.394196 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.398992 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.401001 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mg6lk" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.410355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.421534 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.448347 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.459961 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460040 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460103 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460169 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc 
kubenswrapper[4984]: I0130 10:27:13.460205 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.465128 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.466211 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.469467 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mll9d" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.471517 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.499501 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.500603 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.501182 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.509138 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.527998 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581781 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod 
\"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581807 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.582177 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.582700 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.582776 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.082756753 +0000 UTC m=+938.649060567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.584932 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.584974 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.084965061 +0000 UTC m=+938.651268885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.606508 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.612694 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.619405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.626073 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.654509 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6dd1f5_d0b6_49a6_9270_dd98f2147932.slice/crio-afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd WatchSource:0}: Error finding container afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd: Status 404 returned error can't find the container with id afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.654803 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.672491 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod254d2d7e_3636_429d_b043_501d76db73e9.slice/crio-4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285 WatchSource:0}: Error finding container 4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285: Status 404 returned error can't find the container with id 
4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.693419 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.716977 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.774784 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.792612 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7c3856_3562_4cb4_b131_48302c43ce25.slice/crio-de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54 WatchSource:0}: Error finding container de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54: Status 404 returned error can't find the container with id de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.794743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.795822 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.795875 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.795860633 +0000 UTC m=+939.362164457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.926574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.927336 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3899fe05_64bb_48b9_88dc_2341ad9bc00b.slice/crio-5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268 WatchSource:0}: Error finding container 5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268: Status 404 returned error can't find the container with id 5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.932154 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.941182 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd895dbf_b809_498c_95fd_dfd09a9eeb4d.slice/crio-0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e WatchSource:0}: Error finding container 0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e: Status 404 returned error can't find the container with id 0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.942595 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.945731 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739ed1d4_c090_4166_9352_d048e0b281d6.slice/crio-6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5 WatchSource:0}: Error finding container 6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5: Status 404 returned error can't find the container with id 6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.980382 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.067539 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.079296 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.100367 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.100481 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100640 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100642 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100705 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 
nodeName:}" failed. No retries permitted until 2026-01-30 10:27:15.100685757 +0000 UTC m=+939.666989571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100725 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:15.100716738 +0000 UTC m=+939.667020562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.103712 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" path="/var/lib/kubelet/pods/3a5c8b58-3853-49c9-8d03-c6dd4528b75c/volumes" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.154335 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.162841 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e058b7_deda_4eb8_9cac_6bc08032b3bf.slice/crio-1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431 WatchSource:0}: Error finding container 1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431: Status 404 returned error can't find the 
container with id 1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431 Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.170507 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab1e50a1_4d8f_45f4_8fa0_fd4732dce6f1.slice/crio-1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269 WatchSource:0}: Error finding container 1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269: Status 404 returned error can't find the container with id 1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269 Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.171808 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.172064 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-r7hs4_openstack-operators(df5d4f32-b49b-46ea-8aac-a3b76b2f8f00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.173462 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:14 crc 
kubenswrapper[4984]: E0130 10:27:14.178293 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-gcbx5_openstack-operators(ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.179591 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.194536 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.207740 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.259485 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.269734 4984 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb50c219_6036_48d0_8568_0a1601150272.slice/crio-7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26 WatchSource:0}: Error finding container 7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26: Status 404 returned error can't find the container with id 7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26 Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.274915 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28kkh_openstack-operators(bb50c219-6036-48d0-8568-0a1601150272): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.276383 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.276471 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.287144 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fjvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-jvcvp_openstack-operators(c3eec896-3441-4b0e-a7e5-4bde717dbccd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.288384 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.303645 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.303943 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 
10:27:14.304076 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:16.30404127 +0000 UTC m=+940.870345104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.332223 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.349228 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.355153 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod350834d1_9352_4ca5_9c8a_acf60193ebc8.slice/crio-f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46 WatchSource:0}: Error finding container f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46: Status 404 returned error can't find the container with id f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46 Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.358061 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrb2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4lz58_openstack-operators(350834d1-9352-4ca5-9c8a-acf60193ebc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.359608 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.425215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" event={"ID":"3899fe05-64bb-48b9-88dc-2341ad9bc00b","Type":"ContainerStarted","Data":"5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.427952 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" 
event={"ID":"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e","Type":"ContainerStarted","Data":"7423e3dcebb790545f0305b3f9fce0f7d23e4956c37b8826269846751a8d0f34"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.429605 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" event={"ID":"69e058b7-deda-4eb8-9cac-6bc08032b3bf","Type":"ContainerStarted","Data":"1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.432116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" event={"ID":"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00","Type":"ContainerStarted","Data":"0328eaa0d4243a3502dd840b6a1d42887c4794480293571aa7c7779794e3d6aa"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.433811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" event={"ID":"bb50c219-6036-48d0-8568-0a1601150272","Type":"ContainerStarted","Data":"7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.434093 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.435087 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.436973 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" event={"ID":"dd895dbf-b809-498c-95fd-dfd09a9eeb4d","Type":"ContainerStarted","Data":"0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.438746 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" event={"ID":"1d30b9a6-fe73-4e32-9095-65b1950f7afe","Type":"ContainerStarted","Data":"3a270c1d93a85e5ffa09b35390f9099aba6ad921145527be4b582cc162e90c24"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.440750 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" event={"ID":"c3eec896-3441-4b0e-a7e5-4bde717dbccd","Type":"ContainerStarted","Data":"aac7f455d92c29349c242a510c3b523c81a6ab68e0f28304c608cdadfe1dd218"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.443009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" event={"ID":"350834d1-9352-4ca5-9c8a-acf60193ebc8","Type":"ContainerStarted","Data":"f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.476346 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.476465 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.485740 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" event={"ID":"67a8ae49-7f19-47bc-8e54-0873c535f6ff","Type":"ContainerStarted","Data":"dab369cb1a7120cc3279cb7b90b356e47e16203fe21c24eb8b79a975ce625f60"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.498356 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" event={"ID":"5e7c3856-3562-4cb4-b131-48302c43ce25","Type":"ContainerStarted","Data":"de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.501130 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" event={"ID":"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1","Type":"ContainerStarted","Data":"1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.503450 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.504680 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" event={"ID":"9a53674a-07ad-4bfc-80c8-f55bcc286eb0","Type":"ContainerStarted","Data":"a1c399dc6bbc4631454bc3a44e5698e330ef80e4f6107241ed8b9d628a35188e"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.511360 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" event={"ID":"5d977367-099f-4a10-bf37-9e9cd913932e","Type":"ContainerStarted","Data":"ed1ca11823afdb5f0a64cad0469be57d3366ca9a39346199f448b10aaf6e80d0"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.538380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" event={"ID":"7a6dd1f5-d0b6-49a6-9270-dd98f2147932","Type":"ContainerStarted","Data":"afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.542787 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" event={"ID":"254d2d7e-3636-429d-b043-501d76db73e9","Type":"ContainerStarted","Data":"4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.547671 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.557385 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8bf6651_ff58_478c_be28_39732dac675b.slice/crio-5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581 WatchSource:0}: Error finding container 5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581: Status 404 returned error can't find the container with id 5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581 Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.557513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" event={"ID":"739ed1d4-c090-4166-9352-d048e0b281d6","Type":"ContainerStarted","Data":"6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.562380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" event={"ID":"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e","Type":"ContainerStarted","Data":"74523d9f27d8971efd2b6c156ee39a713cba2c5b665f111da13bc21f1a9ba5c4"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.567853 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" event={"ID":"74bafe89-dc08-4029-823c-f0c3579b8d6b","Type":"ContainerStarted","Data":"90b6e72ae1c9b65557d4a0f35e9dd37eb39fad85709630ccccb606d8afd43cb0"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.817758 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.818009 4984 secret.go:188] Couldn't get 
secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.818135 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:16.818106792 +0000 UTC m=+941.384410616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.124299 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.124493 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.124810 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.124906 4984 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:17.124871717 +0000 UTC m=+941.691175541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.125758 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.125811 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:17.125799462 +0000 UTC m=+941.692103286 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.582974 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" event={"ID":"e8bf6651-ff58-478c-be28-39732dac675b","Type":"ContainerStarted","Data":"5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581"} Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.585677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.585996 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.586116 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.586152 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.594714 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:16 crc kubenswrapper[4984]: I0130 10:27:16.353565 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.353999 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.354263 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:20.35423403 +0000 UTC m=+944.920537854 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: I0130 10:27:16.861796 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.862019 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.862066 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:20.862050368 +0000 UTC m=+945.428354192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: I0130 10:27:17.165950 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:17 crc kubenswrapper[4984]: I0130 10:27:17.166086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.166990 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167078 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:21.167037937 +0000 UTC m=+945.733341851 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167418 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167482 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:21.167471598 +0000 UTC m=+945.733775422 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: I0130 10:27:20.417528 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.417711 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.417977 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert 
podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:28.417953665 +0000 UTC m=+952.984257489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: I0130 10:27:20.924668 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.925014 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.925099 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:28.925079555 +0000 UTC m=+953.491383379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: I0130 10:27:21.227511 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:21 crc kubenswrapper[4984]: I0130 10:27:21.227613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227727 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227836 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227858 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:29.227813834 +0000 UTC m=+953.794117658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227931 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:29.227911667 +0000 UTC m=+953.794215491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.430782 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.446171 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.499379 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.677882 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.678050 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9nc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-zwc2t_openstack-operators(dd895dbf-b809-498c-95fd-dfd09a9eeb4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.679373 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podUID="dd895dbf-b809-498c-95fd-dfd09a9eeb4d" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.941695 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.944673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.023331 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.254512 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.254649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.254722 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.254811 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:45.254789738 +0000 UTC m=+969.821093632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.258628 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.416590 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.417148 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdbfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vpt86_openstack-operators(e8bf6651-ff58-478c-be28-39732dac675b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.419275 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podUID="e8bf6651-ff58-478c-be28-39732dac675b" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.714572 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podUID="dd895dbf-b809-498c-95fd-dfd09a9eeb4d" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.714589 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podUID="e8bf6651-ff58-478c-be28-39732dac675b" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.778374 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.892123 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"] Jan 30 10:27:29 crc kubenswrapper[4984]: W0130 10:27:29.936525 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode420c57f_7248_4454_926f_48766e48236c.slice/crio-bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8 WatchSource:0}: Error finding container bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8: Status 404 returned error can't find the container with id bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8 Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.710880 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" event={"ID":"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e","Type":"ContainerStarted","Data":"c810d9f7c7b37754bfd9d25467e388b6d651887fb467b51ebc2dbc2c5ad76d71"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.711192 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.718658 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" event={"ID":"1d30b9a6-fe73-4e32-9095-65b1950f7afe","Type":"ContainerStarted","Data":"057923cc00385f2e91b782f2b3ce6480838a580079af69fec71237a6dc419f82"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.718743 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.721021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" event={"ID":"69e058b7-deda-4eb8-9cac-6bc08032b3bf","Type":"ContainerStarted","Data":"9919b8529b7b94f0d2cd488b22d849bf8e80dab6bfb38df53250f5414de3876f"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.721139 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.727071 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" event={"ID":"739ed1d4-c090-4166-9352-d048e0b281d6","Type":"ContainerStarted","Data":"72795c3611663a828744b98b297983704d6879e7d3c83583be4f0535d10fabea"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.727210 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.731211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" event={"ID":"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e","Type":"ContainerStarted","Data":"b18d5a1e60121e2a1f8080ee04c697c414af14214cd834e17fb5d8559554d280"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.731267 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.734242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" event={"ID":"3899fe05-64bb-48b9-88dc-2341ad9bc00b","Type":"ContainerStarted","Data":"ae3da3c684895e06c54e2cad65bb8a4cd315abd38d39d1b7f4eee622f0a79715"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.734374 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.735271 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" podStartSLOduration=4.246318389 podStartE2EDuration="18.73523572s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.169363185 +0000 UTC m=+938.735667009" lastFinishedPulling="2026-01-30 10:27:28.658280516 +0000 UTC m=+953.224584340" observedRunningTime="2026-01-30 10:27:30.731631095 +0000 UTC m=+955.297934919" watchObservedRunningTime="2026-01-30 10:27:30.73523572 +0000 UTC m=+955.301539544" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.742466 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" event={"ID":"254d2d7e-3636-429d-b043-501d76db73e9","Type":"ContainerStarted","Data":"55ea3a2fd2999264ef3e83df56f31ba705b340a7780375b4b7ec5e465ddf58d6"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.742607 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.747640 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" podStartSLOduration=4.03516058 podStartE2EDuration="18.747623516s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.947745131 +0000 UTC m=+938.514048955" lastFinishedPulling="2026-01-30 10:27:28.660208067 +0000 UTC m=+953.226511891" observedRunningTime="2026-01-30 10:27:30.747430531 +0000 UTC m=+955.313734365" watchObservedRunningTime="2026-01-30 10:27:30.747623516 +0000 UTC m=+955.313927340" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.755630 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" 
event={"ID":"5d977367-099f-4a10-bf37-9e9cd913932e","Type":"ContainerStarted","Data":"34c327ae5e06300019eb8678bd2841914b3258d52de175df2be4021f271d1283"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.755750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.758565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" event={"ID":"9a53674a-07ad-4bfc-80c8-f55bcc286eb0","Type":"ContainerStarted","Data":"251e4a76d7e4cac0b436f06c235497525fdc90d721bc7932a4578af4410332c0"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.758997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.771829 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" podStartSLOduration=3.088797299 podStartE2EDuration="18.771812183s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.698568472 +0000 UTC m=+938.264872286" lastFinishedPulling="2026-01-30 10:27:29.381583306 +0000 UTC m=+953.947887170" observedRunningTime="2026-01-30 10:27:30.768598809 +0000 UTC m=+955.334902633" watchObservedRunningTime="2026-01-30 10:27:30.771812183 +0000 UTC m=+955.338116007" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.801938 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" podStartSLOduration=3.519631501 podStartE2EDuration="18.801920526s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.099232029 +0000 UTC m=+938.665535853" 
lastFinishedPulling="2026-01-30 10:27:29.381521054 +0000 UTC m=+953.947824878" observedRunningTime="2026-01-30 10:27:30.80092748 +0000 UTC m=+955.367231304" watchObservedRunningTime="2026-01-30 10:27:30.801920526 +0000 UTC m=+955.368224360" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.819349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" event={"ID":"7a6dd1f5-d0b6-49a6-9270-dd98f2147932","Type":"ContainerStarted","Data":"7a144195e705916159414100ed81046290642b8fb85fa4bb467ba9f97c474f09"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.820385 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.849835 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" event={"ID":"8d22f0a7-a541-405b-8146-fb098d02ddcc","Type":"ContainerStarted","Data":"013825f773fbc5742f79d7470bdf6d6f2fd898b0335d04e5b8e0d45b25964af2"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.851168 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" event={"ID":"5e7c3856-3562-4cb4-b131-48302c43ce25","Type":"ContainerStarted","Data":"95e8b6b081f0ca05e5be08fc430949478be2e4c08f82c2f80cf963627e1476ab"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.852017 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.857957 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" 
event={"ID":"e420c57f-7248-4454-926f-48766e48236c","Type":"ContainerStarted","Data":"bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.859020 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" event={"ID":"67a8ae49-7f19-47bc-8e54-0873c535f6ff","Type":"ContainerStarted","Data":"6bf15dac9dacb4c9039d9909d4d4395ac1b4ae61587e0c6c716280d6534f0869"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.859447 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.875639 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" podStartSLOduration=3.606670962 podStartE2EDuration="18.875617836s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.166387217 +0000 UTC m=+938.732691041" lastFinishedPulling="2026-01-30 10:27:29.435334091 +0000 UTC m=+954.001637915" observedRunningTime="2026-01-30 10:27:30.848389719 +0000 UTC m=+955.414693553" watchObservedRunningTime="2026-01-30 10:27:30.875617836 +0000 UTC m=+955.441921660" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.898576 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" event={"ID":"74bafe89-dc08-4029-823c-f0c3579b8d6b","Type":"ContainerStarted","Data":"465b51a4342cb655be167ec4d161daf299fffd38bcce758b4e92cade0e91eacf"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.899209 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.903222 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" podStartSLOduration=3.052173055 podStartE2EDuration="18.903208582s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.530430246 +0000 UTC m=+938.096734070" lastFinishedPulling="2026-01-30 10:27:29.381465773 +0000 UTC m=+953.947769597" observedRunningTime="2026-01-30 10:27:30.90047451 +0000 UTC m=+955.466778334" watchObservedRunningTime="2026-01-30 10:27:30.903208582 +0000 UTC m=+955.469512406" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.955700 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" podStartSLOduration=3.2467186359999998 podStartE2EDuration="18.955683043s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.686756501 +0000 UTC m=+938.253060315" lastFinishedPulling="2026-01-30 10:27:29.395720858 +0000 UTC m=+953.962024722" observedRunningTime="2026-01-30 10:27:30.931453906 +0000 UTC m=+955.497757730" watchObservedRunningTime="2026-01-30 10:27:30.955683043 +0000 UTC m=+955.521986887" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.956820 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" podStartSLOduration=3.226950135 podStartE2EDuration="18.956815003s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.664405162 +0000 UTC m=+938.230708986" lastFinishedPulling="2026-01-30 10:27:29.39426999 +0000 UTC m=+953.960573854" observedRunningTime="2026-01-30 10:27:30.953631399 +0000 UTC m=+955.519935223" watchObservedRunningTime="2026-01-30 10:27:30.956815003 +0000 UTC m=+955.523118827" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.994803 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" podStartSLOduration=3.916940309 podStartE2EDuration="18.994789913s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.339552975 +0000 UTC m=+938.905856799" lastFinishedPulling="2026-01-30 10:27:29.417402579 +0000 UTC m=+953.983706403" observedRunningTime="2026-01-30 10:27:30.992039621 +0000 UTC m=+955.558343445" watchObservedRunningTime="2026-01-30 10:27:30.994789913 +0000 UTC m=+955.561093737" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.031266 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" podStartSLOduration=3.565738134 podStartE2EDuration="19.031235992s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.928823173 +0000 UTC m=+938.495126987" lastFinishedPulling="2026-01-30 10:27:29.394321021 +0000 UTC m=+953.960624845" observedRunningTime="2026-01-30 10:27:31.027181656 +0000 UTC m=+955.593485480" watchObservedRunningTime="2026-01-30 10:27:31.031235992 +0000 UTC m=+955.597539816" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.061911 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" podStartSLOduration=3.940045598 podStartE2EDuration="19.06189645s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.536359822 +0000 UTC m=+938.102663656" lastFinishedPulling="2026-01-30 10:27:28.658210684 +0000 UTC m=+953.224514508" observedRunningTime="2026-01-30 10:27:31.060811161 +0000 UTC m=+955.627114985" watchObservedRunningTime="2026-01-30 10:27:31.06189645 +0000 UTC m=+955.628200274" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.095347 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" podStartSLOduration=3.8304897540000002 podStartE2EDuration="19.09533117s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.116656788 +0000 UTC m=+938.682960612" lastFinishedPulling="2026-01-30 10:27:29.381498204 +0000 UTC m=+953.947802028" observedRunningTime="2026-01-30 10:27:31.09193364 +0000 UTC m=+955.658237464" watchObservedRunningTime="2026-01-30 10:27:31.09533117 +0000 UTC m=+955.661634994" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.141220 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" podStartSLOduration=4.286296101 podStartE2EDuration="19.141201477s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.803296418 +0000 UTC m=+938.369600242" lastFinishedPulling="2026-01-30 10:27:28.658201794 +0000 UTC m=+953.224505618" observedRunningTime="2026-01-30 10:27:31.137418338 +0000 UTC m=+955.703722152" watchObservedRunningTime="2026-01-30 10:27:31.141201477 +0000 UTC m=+955.707505301" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.000756 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001128 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001178 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001894 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001960 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" gracePeriod=600 Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937294 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" exitCode=0 Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937607 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937683 4984 scope.go:117] "RemoveContainer" containerID="fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.738634 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.739564 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-r7hs4_openstack-operators(df5d4f32-b49b-46ea-8aac-a3b76b2f8f00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.740876 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.746975 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.791336 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.825090 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.838379 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.860546 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.896331 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.916519 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.916719 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fjvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-jvcvp_openstack-operators(c3eec896-3441-4b0e-a7e5-4bde717dbccd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.918092 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.928404 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.129214 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.148710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.192269 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.294715 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.413899 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.615846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.023095 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 
10:27:44.023639 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28kkh_openstack-operators(bb50c219-6036-48d0-8568-0a1601150272): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.025194 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:44 crc kubenswrapper[4984]: I0130 10:27:44.107983 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.585750 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.585967 4984 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrb2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4lz58_openstack-operators(350834d1-9352-4ca5-9c8a-acf60193ebc8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.587308 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.329870 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.340645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.459428 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mg6lk" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.465076 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:47 crc kubenswrapper[4984]: I0130 10:27:47.349275 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:47 crc kubenswrapper[4984]: W0130 10:27:47.353084 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87613c07_d864_4440_b31c_03c4bb3f8ce0.slice/crio-037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59 WatchSource:0}: Error finding container 037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59: Status 404 returned error can't find the container with id 037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59 Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.046927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" event={"ID":"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1","Type":"ContainerStarted","Data":"0cebf764529f6291567b24ce6a6df83a4714b25e31eec21fb7fcbbe5a2a61b17"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.047377 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048593 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" event={"ID":"87613c07-d864-4440-b31c-03c4bb3f8ce0","Type":"ContainerStarted","Data":"4c7925caf0b0545a2a7a28fbcb4b948370484815a76708bbf3d088b5e0d31c05"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048648 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" event={"ID":"87613c07-d864-4440-b31c-03c4bb3f8ce0","Type":"ContainerStarted","Data":"037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048736 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.050902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" event={"ID":"dd895dbf-b809-498c-95fd-dfd09a9eeb4d","Type":"ContainerStarted","Data":"91134e293eaeb33fcfd4b39ce2a56035a34b72c90acdf0b7fdfa6d5176510ab7"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.051122 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.052344 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" event={"ID":"e8bf6651-ff58-478c-be28-39732dac675b","Type":"ContainerStarted","Data":"1af3c14bbcc5dd5b6f35eabaa5ebd5e6526d8d84671fe55401880aed7fd18d06"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.054774 4984 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.056354 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" event={"ID":"8d22f0a7-a541-405b-8146-fb098d02ddcc","Type":"ContainerStarted","Data":"27408c10e7f614f45e5091a02e8d6153942c14ebf77a56107d2d0c7a118e9cd4"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.057259 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.058687 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" event={"ID":"e420c57f-7248-4454-926f-48766e48236c","Type":"ContainerStarted","Data":"08c94bcdb7901a89f9ae6dc972ad76bdad1f8d0cb624d0a677f72f9f3b6bd6a6"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.058841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.064490 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podStartSLOduration=3.321453522 podStartE2EDuration="36.064470861s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.178071584 +0000 UTC m=+938.744375408" lastFinishedPulling="2026-01-30 10:27:46.921088923 +0000 UTC m=+971.487392747" observedRunningTime="2026-01-30 10:27:48.061199955 +0000 UTC m=+972.627503809" watchObservedRunningTime="2026-01-30 10:27:48.064470861 
+0000 UTC m=+972.630774675" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.093765 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" podStartSLOduration=35.093742352 podStartE2EDuration="35.093742352s" podCreationTimestamp="2026-01-30 10:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:27:48.087860647 +0000 UTC m=+972.654164481" watchObservedRunningTime="2026-01-30 10:27:48.093742352 +0000 UTC m=+972.660046186" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.139444 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" podStartSLOduration=19.013649058 podStartE2EDuration="36.139426834s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:29.795357458 +0000 UTC m=+954.361661282" lastFinishedPulling="2026-01-30 10:27:46.921135234 +0000 UTC m=+971.487439058" observedRunningTime="2026-01-30 10:27:48.13279968 +0000 UTC m=+972.699103514" watchObservedRunningTime="2026-01-30 10:27:48.139426834 +0000 UTC m=+972.705730658" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.151883 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podStartSLOduration=3.214735243 podStartE2EDuration="36.151864622s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.943353365 +0000 UTC m=+938.509657189" lastFinishedPulling="2026-01-30 10:27:46.880482734 +0000 UTC m=+971.446786568" observedRunningTime="2026-01-30 10:27:48.15103746 +0000 UTC m=+972.717341284" watchObservedRunningTime="2026-01-30 10:27:48.151864622 +0000 UTC m=+972.718168446" Jan 30 10:27:48 crc kubenswrapper[4984]: 
I0130 10:27:48.170879 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" podStartSLOduration=19.190531335 podStartE2EDuration="36.170860282s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:29.940774306 +0000 UTC m=+954.507078130" lastFinishedPulling="2026-01-30 10:27:46.921103253 +0000 UTC m=+971.487407077" observedRunningTime="2026-01-30 10:27:48.166640431 +0000 UTC m=+972.732944275" watchObservedRunningTime="2026-01-30 10:27:48.170860282 +0000 UTC m=+972.737164116"
Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.198518 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podStartSLOduration=2.827026442 podStartE2EDuration="35.198484589s" podCreationTimestamp="2026-01-30 10:27:13 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.559743272 +0000 UTC m=+939.126047106" lastFinishedPulling="2026-01-30 10:27:46.931201429 +0000 UTC m=+971.497505253" observedRunningTime="2026-01-30 10:27:48.191757472 +0000 UTC m=+972.758061306" watchObservedRunningTime="2026-01-30 10:27:48.198484589 +0000 UTC m=+972.764788433"
Jan 30 10:27:53 crc kubenswrapper[4984]: I0130 10:27:53.104031 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"
Jan 30 10:27:53 crc kubenswrapper[4984]: I0130 10:27:53.256710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"
Jan 30 10:27:55 crc kubenswrapper[4984]: E0130 10:27:55.092594 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272"
Jan 30 10:27:55 crc kubenswrapper[4984]: I0130 10:27:55.473985 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"
Jan 30 10:27:56 crc kubenswrapper[4984]: E0130 10:27:56.096055 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00"
Jan 30 10:27:56 crc kubenswrapper[4984]: E0130 10:27:56.096735 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd"
Jan 30 10:27:58 crc kubenswrapper[4984]: I0130 10:27:58.506816 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"
Jan 30 10:27:59 crc kubenswrapper[4984]: I0130 10:27:59.032703 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"
Jan 30 10:27:59 crc kubenswrapper[4984]: E0130 10:27:59.093596 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8"
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.236236 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" event={"ID":"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00","Type":"ContainerStarted","Data":"8e917a20d18251cfbbd95595909c367bc476183eedd01a37805094362f38634e"}
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.237044 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.261890 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podStartSLOduration=2.724782623 podStartE2EDuration="57.26186538s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.17185083 +0000 UTC m=+938.738154654" lastFinishedPulling="2026-01-30 10:28:08.708933587 +0000 UTC m=+993.275237411" observedRunningTime="2026-01-30 10:28:09.250493962 +0000 UTC m=+993.816797836" watchObservedRunningTime="2026-01-30 10:28:09.26186538 +0000 UTC m=+993.828169244"
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.245830 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" event={"ID":"bb50c219-6036-48d0-8568-0a1601150272","Type":"ContainerStarted","Data":"481082a9ce09ea61d61beec3f2fb8f04bb2e40427e325fddb2ea017e5bcc9b79"}
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.246601 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.271124 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podStartSLOduration=2.920296906 podStartE2EDuration="58.271097835s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.274746709 +0000 UTC m=+938.841050533" lastFinishedPulling="2026-01-30 10:28:09.625547598 +0000 UTC m=+994.191851462" observedRunningTime="2026-01-30 10:28:10.260840738 +0000 UTC m=+994.827144572" watchObservedRunningTime="2026-01-30 10:28:10.271097835 +0000 UTC m=+994.837401699"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.252864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" event={"ID":"c3eec896-3441-4b0e-a7e5-4bde717dbccd","Type":"ContainerStarted","Data":"7a1eb40f15a44d1d204404543de4b3aa80540515fb198a0230e65f5874ee4155"}
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.253491 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.254433 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" event={"ID":"350834d1-9352-4ca5-9c8a-acf60193ebc8","Type":"ContainerStarted","Data":"1cace836f7cd1c54c0cd73265d5730c3fa089f2c5f9db3dcc006af664f7e99ba"}
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.254751 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.274435 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podStartSLOduration=3.075359847 podStartE2EDuration="59.27442232s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.286971481 +0000 UTC m=+938.853275315" lastFinishedPulling="2026-01-30 10:28:10.486033964 +0000 UTC m=+995.052337788" observedRunningTime="2026-01-30 10:28:11.270872754 +0000 UTC m=+995.837176578" watchObservedRunningTime="2026-01-30 10:28:11.27442232 +0000 UTC m=+995.840726134"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.294871 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podStartSLOduration=3.054818123 podStartE2EDuration="59.294842942s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.357931769 +0000 UTC m=+938.924235593" lastFinishedPulling="2026-01-30 10:28:10.597956588 +0000 UTC m=+995.164260412" observedRunningTime="2026-01-30 10:28:11.292703224 +0000 UTC m=+995.859007048" watchObservedRunningTime="2026-01-30 10:28:11.294842942 +0000 UTC m=+995.861146766"
Jan 30 10:28:13 crc kubenswrapper[4984]: I0130 10:28:13.531377 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.360922 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.451779 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.587519 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.592067 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.596702 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601176 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601339 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601525 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601355 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hf6x5"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.611942 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.680754 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.682026 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.687003 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.700053 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.717457 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.717550 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819547 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819590 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819659 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.821404 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.857409 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920881 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920964 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.921717 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.921951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.924942 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.936325 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.001515 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.406832 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:40 crc kubenswrapper[4984]: W0130 10:28:40.416524 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d1746bb_5861_4f20_a9d0_af3129baffd4.slice/crio-f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e WatchSource:0}: Error finding container f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e: Status 404 returned error can't find the container with id f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.476029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" event={"ID":"7d1746bb-5861-4f20-a9d0-af3129baffd4","Type":"ContainerStarted","Data":"f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e"}
Jan 30 10:28:40 crc kubenswrapper[4984]: W0130 10:28:40.516725 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb253c369_a41e_47cb_af7e_0ca288023264.slice/crio-8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a WatchSource:0}: Error finding container 8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a: Status 404 returned error can't find the container with id 8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.519406 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:41 crc kubenswrapper[4984]: I0130 10:28:41.486577 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" event={"ID":"b253c369-a41e-47cb-af7e-0ca288023264","Type":"ContainerStarted","Data":"8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a"}
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.181489 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.204794 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.205800 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.229890 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367274 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367447 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468569 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468646 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.469558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.470053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.488046 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.505375 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.524663 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.526293 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.537789 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.544200 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.780631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.781102 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.781134 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.785378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.788019 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.805082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.851619 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.104072 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.366514 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.368113 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.372550 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4bdkz"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.373355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.374860 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375418 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375459 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375855 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.382287 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.420232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:43 crc kubenswrapper[4984]: W0130 10:28:43.421185 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice/crio-9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6 WatchSource:0}: Error finding container 9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6: Status 404 returned error can't find the container with id 9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525769 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525883 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525906 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525936 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526091 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526226 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.536583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerStarted","Data":"ca1bcc36301121abb73f0c33aefe22f5b51a7c08d43dfde31a9e2410ca9c91c2"}
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.537837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" event={"ID":"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d","Type":"ContainerStarted","Data":"9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6"}
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627851 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627928 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627966 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130
10:28:43.628059 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628082 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628127 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628152 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628792 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.629574 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.629800 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.630172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.630631 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.633789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " 
pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.634868 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.635561 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.636857 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.642944 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.647235 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.652625 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.655617 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.657320 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664014 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664379 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dmx9d" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664501 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664647 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.672426 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.672539 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.698109 4984 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832838 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832857 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832902 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832922 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832977 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832994 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.833016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.833043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934598 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934671 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 
10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934715 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934742 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934802 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934818 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934833 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934860 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.935288 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.935429 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.937487 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.937961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.938107 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.940977 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.941141 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.941270 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.946540 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.953476 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.960434 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.967722 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.013871 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.195461 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:28:44 crc kubenswrapper[4984]: W0130 10:28:44.217165 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0c1fc2_7876_468d_86b8_7348a8418ee9.slice/crio-bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282 WatchSource:0}: Error finding container bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282: Status 404 returned error can't find the container with id bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282 Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.452691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.563318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282"} Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.995604 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.009444 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.009571 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.011561 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.015857 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2wxch" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.016028 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.017639 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.018938 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165690 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165735 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165763 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165816 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") 
" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.266996 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267132 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267181 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267209 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267234 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267292 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.269850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270024 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270320 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.271798 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.274212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.287951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.291903 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 
10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.310137 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.336753 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.354302 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.355460 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.357281 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.357936 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.358145 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.358302 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-l87c5" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.362774 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493387 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493449 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493467 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493491 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493537 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493573 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594639 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594934 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595286 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595455 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595606 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596529 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596633 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596774 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596871 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.597551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " 
pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.606799 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.613368 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.621152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.634526 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.669710 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.732964 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.748576 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.748688 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.752365 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-n86x5" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.752555 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.753385 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904086 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904458 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010367 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010591 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " 
pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.011783 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.011793 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.015706 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.021022 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.028020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.112395 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.876888 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.877805 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.885180 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2q58b" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.894289 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.042611 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.145696 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psg4h\" (UniqueName: 
\"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.170030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.203672 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: W0130 10:28:49.475841 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d00f70a_4071_4375_81f3_45e7aab83cd3.slice/crio-9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83 WatchSource:0}: Error finding container 9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83: Status 404 returned error can't find the container with id 9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83 Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.617660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83"} Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.197716 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.199561 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.204232 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7sxg4" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.204739 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.206157 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.207098 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.208427 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.223239 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.233069 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304732 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304787 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " 
pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304944 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305025 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305050 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 
10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305133 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305161 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305192 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305240 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305311 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgs95\" (UniqueName: \"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc 
kubenswrapper[4984]: I0130 10:28:52.305388 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407016 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407099 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407125 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407149 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407163 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407186 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407227 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407269 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407289 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgs95\" (UniqueName: 
\"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407601 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407637 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " 
pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407967 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.409984 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.409994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410128 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410157 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.415872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.422725 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.424216 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgs95\" (UniqueName: \"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.437219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: 
\"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.532796 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.549861 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.081716 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.083110 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.086143 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088313 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hb4nh" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088444 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088562 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.096082 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.218847 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220144 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220211 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220296 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220370 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220428 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321427 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321487 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321515 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 
10:28:53.321546 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321619 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321685 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322305 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322650 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322989 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.323909 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.325832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.335063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc 
kubenswrapper[4984]: I0130 10:28:53.342514 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.344149 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.350209 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.423096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.336620 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.338718 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341050 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341398 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wgztt" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341797 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341981 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.343433 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.496991 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497080 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497146 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497174 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497408 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497526 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497576 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 
10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599833 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599999 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600133 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600465 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600889 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.601159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.601447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc 
kubenswrapper[4984]: I0130 10:28:56.601809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.608179 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.614981 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.616141 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.621895 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.623483 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.663542 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.848031 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.848460 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq552,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-22gp8_openstack(f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.849675 4984 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.860737 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.860925 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8hst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mvnjm_openstack(7d1746bb-5861-4f20-a9d0-af3129baffd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.862178 4984 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" podUID="7d1746bb-5861-4f20-a9d0-af3129baffd4" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.716357 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.996888 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.997576 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5v5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-nt5m5_openstack(8939f3c8-3f71-4369-b30a-1ce52517ec33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.999216 4984 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.000558 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.001084 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78qfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mmlmd_openstack(b253c369-a41e-47cb-af7e-0ca288023264): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.002299 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" podUID="b253c369-a41e-47cb-af7e-0ca288023264" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.286053 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372444 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372501 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372543 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.373231 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config" (OuterVolumeSpecName: "config") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.376643 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.382945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst" (OuterVolumeSpecName: "kube-api-access-h8hst") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "kube-api-access-h8hst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473784 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473821 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473837 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.718969 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.727672 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.728498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" event={"ID":"7d1746bb-5861-4f20-a9d0-af3129baffd4","Type":"ContainerDied","Data":"f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e"} Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.728580 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.731545 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.736658 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.744418 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.851276 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.853816 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.892426 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: W0130 10:29:05.987384 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53bd6a11_6ac6_4b0e_ae41_8afd88f351e6.slice/crio-17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69 WatchSource:0}: Error finding container 17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69: Status 404 returned error can't find the container with id 17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.110979 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1746bb-5861-4f20-a9d0-af3129baffd4" path="/var/lib/kubelet/pods/7d1746bb-5861-4f20-a9d0-af3129baffd4/volumes" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.163309 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:29:06 crc kubenswrapper[4984]: W0130 10:29:06.182890 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63184ee8_263b_4506_8844_4ae4fd2a80c7.slice/crio-0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057 WatchSource:0}: Error finding container 0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057: Status 404 returned error can't find the container with id 0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.347849 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.356154 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.404795 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"b253c369-a41e-47cb-af7e-0ca288023264\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.404956 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"b253c369-a41e-47cb-af7e-0ca288023264\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.405915 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config" (OuterVolumeSpecName: "config") pod "b253c369-a41e-47cb-af7e-0ca288023264" (UID: "b253c369-a41e-47cb-af7e-0ca288023264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.412939 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk" (OuterVolumeSpecName: "kube-api-access-78qfk") pod "b253c369-a41e-47cb-af7e-0ca288023264" (UID: "b253c369-a41e-47cb-af7e-0ca288023264"). InnerVolumeSpecName "kube-api-access-78qfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.418207 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.507352 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.507657 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:06 crc kubenswrapper[4984]: W0130 10:29:06.508640 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2590fda_d6e0_4182_96ef_8326001108d9.slice/crio-eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1 WatchSource:0}: Error finding container eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1: Status 404 returned error can't find the container with id eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.736955 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerStarted","Data":"f05fd5917bae61700291c3765574cc3a3b08139624adb6fb3ccd5f7058c55fa6"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.738642 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"d7558e6899b57453defe5a8f8e0b329c15d0ab1ed9c92562f548313b7fd4ee8d"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.740026 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab30531b-1df7-460e-956c-bc849792098b","Type":"ContainerStarted","Data":"dbeae79b557a4d5d76adcfd6e3c34ce51742ab31d0cd794bd43ac8a4800773e7"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.741146 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" event={"ID":"b253c369-a41e-47cb-af7e-0ca288023264","Type":"ContainerDied","Data":"8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.741222 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.759743 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.762081 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.768134 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.770435 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"c5002e46986f6f4c451f9468cadd7f7ba7e729210f83d0f4878b290572656b1c"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 
10:29:06.772489 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.777220 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx" event={"ID":"63184ee8-263b-4506-8844-4ae4fd2a80c7","Type":"ContainerStarted","Data":"0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.781383 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"f233401272b163e48dbfe75166acec0b6153e73e21dd1ce1fe5fe3a64aef2476"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.802386 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.807582 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"] Jan 30 10:29:08 crc kubenswrapper[4984]: I0130 10:29:08.100235 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b253c369-a41e-47cb-af7e-0ca288023264" path="/var/lib/kubelet/pods/b253c369-a41e-47cb-af7e-0ca288023264/volumes" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.844345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx" event={"ID":"63184ee8-263b-4506-8844-4ae4fd2a80c7","Type":"ContainerStarted","Data":"295e903f6c433d1b4cc0fab8c84b49aafdbbd5dd9e51ac452ad347ad8a6d5804"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.845672 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-m4spx" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.847055 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab30531b-1df7-460e-956c-bc849792098b","Type":"ContainerStarted","Data":"af5568469e16ed4d5014b0cc648521c552336374ab29b1f325282d9543145451"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.847596 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.850348 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"a9549e086d365a8c2c24259a2c7d9d56ea980f46e6d4c7af42d444de5cb1f4d6"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.852961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"1471d69f10e0783bcf37f9e8756fb9a0077c5bfb22b7ff96a50716dd478c6f8b"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.855894 4984 generic.go:334] "Generic (PLEG): container finished" podID="c2590fda-d6e0-4182-96ef-8326001108d9" containerID="9d4349be33beb7e3ee06b04f992d8ded7fcf2e6cf09b9d5e05f0eebd74d32355" exitCode=0 Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.856031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerDied","Data":"9d4349be33beb7e3ee06b04f992d8ded7fcf2e6cf09b9d5e05f0eebd74d32355"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.859867 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.863881 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerStarted","Data":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.864504 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.867571 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.880374 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m4spx" podStartSLOduration=15.103284748 podStartE2EDuration="21.88033621s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.197426197 +0000 UTC m=+1050.763730021" lastFinishedPulling="2026-01-30 10:29:12.974477609 +0000 UTC m=+1057.540781483" observedRunningTime="2026-01-30 10:29:13.870215947 +0000 UTC m=+1058.436519821" watchObservedRunningTime="2026-01-30 10:29:13.88033621 +0000 UTC m=+1058.446640074" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.969894 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.661802608 podStartE2EDuration="25.9698614s" podCreationTimestamp="2026-01-30 10:28:48 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.72388684 +0000 UTC m=+1050.290190674" lastFinishedPulling="2026-01-30 10:29:13.031945632 +0000 UTC m=+1057.598249466" observedRunningTime="2026-01-30 10:29:13.951885314 +0000 UTC m=+1058.518189168" watchObservedRunningTime="2026-01-30 10:29:13.9698614 +0000 UTC m=+1058.536165264" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.011847 4984 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.228699738 podStartE2EDuration="28.011813693s" podCreationTimestamp="2026-01-30 10:28:46 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.757309964 +0000 UTC m=+1050.323613788" lastFinishedPulling="2026-01-30 10:29:12.540423909 +0000 UTC m=+1057.106727743" observedRunningTime="2026-01-30 10:29:14.009955793 +0000 UTC m=+1058.576259637" watchObservedRunningTime="2026-01-30 10:29:14.011813693 +0000 UTC m=+1058.578117527" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.882946 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"d8f0feffdfd7574a9a8180a31875276b60a877836a0b9fb1fbeaca48e0525762"} Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.882983 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"c5fb323d6241bd3adc240c025ac0452faeea862ad83f4407e1c834d1c021318a"} Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.883013 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.883030 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.912044 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-js4wt" podStartSLOduration=16.46240751 podStartE2EDuration="22.912026702s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.513443148 +0000 UTC m=+1051.079746972" lastFinishedPulling="2026-01-30 10:29:12.96306234 +0000 UTC m=+1057.529366164" observedRunningTime="2026-01-30 10:29:14.910162332 +0000 UTC 
m=+1059.476466146" watchObservedRunningTime="2026-01-30 10:29:14.912026702 +0000 UTC m=+1059.478330516" Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.901174 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"c68f05a2b480f4ae854f8b2225307b3c4c1f995e37322d7ecdb36aacfd1dba7c"} Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.907677 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"c2df16dd3118c3ba5aff82621c2efd91e81fefc5fbe2d19e285e4a56a869f07d"} Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.953553 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.820488816 podStartE2EDuration="23.953524119s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.428519943 +0000 UTC m=+1050.994823767" lastFinishedPulling="2026-01-30 10:29:15.561555246 +0000 UTC m=+1060.127859070" observedRunningTime="2026-01-30 10:29:15.934576907 +0000 UTC m=+1060.500880741" watchObservedRunningTime="2026-01-30 10:29:15.953524119 +0000 UTC m=+1060.519827963" Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.973117 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.411352038 podStartE2EDuration="20.973090818s" podCreationTimestamp="2026-01-30 10:28:55 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.01252755 +0000 UTC m=+1050.578831374" lastFinishedPulling="2026-01-30 10:29:15.57426633 +0000 UTC m=+1060.140570154" observedRunningTime="2026-01-30 10:29:15.969670116 +0000 UTC m=+1060.535973970" watchObservedRunningTime="2026-01-30 10:29:15.973090818 +0000 UTC m=+1060.539394652" Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 
10:29:16.664810 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 10:29:16.922308 4984 generic.go:334] "Generic (PLEG): container finished" podID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" exitCode=0 Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 10:29:16.923537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.423750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.496667 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.664128 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.738053 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.934057 4984 generic.go:334] "Generic (PLEG): container finished" podID="c4717968-368b-4b9d-acca-b2aee21abd1f" containerID="f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68" exitCode=0 Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.934187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerDied","Data":"f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.939516 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerStarted","Data":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.940027 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.949326 4984 generic.go:334] "Generic (PLEG): container finished" podID="66296a3e-33af-496f-a870-9d0932aa4178" containerID="a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a" exitCode=0 Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.949370 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerDied","Data":"a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.950497 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.003862 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podStartSLOduration=2.637354001 podStartE2EDuration="36.003838579s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:43.168390541 +0000 UTC m=+1027.734694365" lastFinishedPulling="2026-01-30 10:29:16.534875119 +0000 UTC m=+1061.101178943" observedRunningTime="2026-01-30 10:29:18.000664083 +0000 UTC m=+1062.566967947" watchObservedRunningTime="2026-01-30 10:29:18.003838579 +0000 UTC m=+1062.570142413" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.484742 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 
10:29:18.794410 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"] Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.846346 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ms66d"] Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.848533 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.855781 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.865112 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.867231 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.873054 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.877896 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.893363 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ms66d"] Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948173 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948199 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948273 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: 
I0130 10:29:18.948317 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948335 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948391 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.967047 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"02f0662bf68eae537cc49b2c3b8ac8f5215a58dc0cced5a9f890f2fda42996ff"} Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.972341 4984 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"64848261151fdbc4cd8e571028717ed6924c136b8ae75bd564d260cc390d517c"} Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.992496 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.772420484 podStartE2EDuration="33.992465077s" podCreationTimestamp="2026-01-30 10:28:45 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.743050638 +0000 UTC m=+1050.309354462" lastFinishedPulling="2026-01-30 10:29:12.963095201 +0000 UTC m=+1057.529399055" observedRunningTime="2026-01-30 10:29:18.988470259 +0000 UTC m=+1063.554774083" watchObservedRunningTime="2026-01-30 10:29:18.992465077 +0000 UTC m=+1063.558768901" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.015982 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.782402534 podStartE2EDuration="36.015964192s" podCreationTimestamp="2026-01-30 10:28:43 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.74273191 +0000 UTC m=+1050.309035734" lastFinishedPulling="2026-01-30 10:29:12.976293568 +0000 UTC m=+1057.542597392" observedRunningTime="2026-01-30 10:29:19.01069786 +0000 UTC m=+1063.577001684" watchObservedRunningTime="2026-01-30 10:29:19.015964192 +0000 UTC m=+1063.582268006" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.019133 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049475 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049527 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049592 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 
10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049749 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.051698 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.053608 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.054161 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055148 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.066479 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.077921 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.084673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.086812 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.142518 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.162311 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.166701 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.170921 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.173737 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.191117 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.203004 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.232074 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.262505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.262615 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.263716 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.263780 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.264284 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.292596 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.366955 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.367011 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369328 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369408 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config" (OuterVolumeSpecName: "config") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.372935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.371243 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.374067 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552" (OuterVolumeSpecName: "kube-api-access-kq552") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "kube-api-access-kq552". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375072 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375534 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375640 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375710 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375724 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375738 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.377992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.379773 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.382550 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.406129 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.426241 4984 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-northd-0"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.429279 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433476 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433768 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wpfh5" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433849 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433954 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580488 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580565 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580677 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580713 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580766 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580927 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.588463 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682278 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682435 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682466 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682498 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682525 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683856 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683886 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.688339 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: 
I0130 10:29:19.688646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.693613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.704384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.757018 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: W0130 10:29:19.760183 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbcc0b77_42fd_47ec_9b91_94e2c070c0ec.slice/crio-384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004 WatchSource:0}: Error finding container 384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004: Status 404 returned error can't find the container with id 384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004 Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.760510 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ms66d"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.824434 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:19 crc kubenswrapper[4984]: W0130 10:29:19.832744 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod364f1e33_f14a_4248_82d5_eca3ab3e36c3.slice/crio-6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c WatchSource:0}: Error finding container 6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c: Status 404 returned error can't find the container with id 6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.988651 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" event={"ID":"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d","Type":"ContainerDied","Data":"9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.988925 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.991870 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ms66d" event={"ID":"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec","Type":"ContainerStarted","Data":"384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.994321 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerStarted","Data":"6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.994971 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" containerID="cri-o://68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" gracePeriod=10 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.029486 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:20 crc kubenswrapper[4984]: W0130 10:29:20.079037 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8fd5e7_478c_498f_b9a6_5ad836cf08fa.slice/crio-619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46 WatchSource:0}: Error finding container 619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46: Status 404 returned error can't find the container with id 619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.220125 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 10:29:20 crc kubenswrapper[4984]: W0130 10:29:20.246168 4984 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86681f0_5ba9_45f2_b0b7_0b9a49dc6706.slice/crio-76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3 WatchSource:0}: Error finding container 76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3: Status 404 returned error can't find the container with id 76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.450560 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497623 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497799 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497862 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.501894 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f" (OuterVolumeSpecName: "kube-api-access-x5v5f") pod 
"8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "kube-api-access-x5v5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.542602 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.548974 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config" (OuterVolumeSpecName: "config") pod "8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.599379 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.599485 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.600690 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.005403 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.007865 4984 generic.go:334] "Generic (PLEG): container finished" podID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerID="f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.007989 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010471 4984 generic.go:334] "Generic (PLEG): container finished" podID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010517 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010600 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"ca1bcc36301121abb73f0c33aefe22f5b51a7c08d43dfde31a9e2410ca9c91c2"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010623 4984 scope.go:117] "RemoveContainer" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012353 4984 generic.go:334] "Generic (PLEG): container finished" podID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012436 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerStarted","Data":"619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.016296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ms66d" 
event={"ID":"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec","Type":"ContainerStarted","Data":"07a4855467dd1c7eda641750a5733aa89bf776673d9119c09c08bdeced73253e"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.059096 4984 scope.go:117] "RemoveContainer" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.134328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.174871 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ms66d" podStartSLOduration=3.174848456 podStartE2EDuration="3.174848456s" podCreationTimestamp="2026-01-30 10:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:21.149003998 +0000 UTC m=+1065.715307822" watchObservedRunningTime="2026-01-30 10:29:21.174848456 +0000 UTC m=+1065.741152280" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.188881 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.309659 4984 scope.go:117] "RemoveContainer" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: E0130 10:29:21.314472 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": container with ID starting with 68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea not found: ID does not exist" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.314529 4984 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} err="failed to get container status \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": rpc error: code = NotFound desc = could not find container \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": container with ID starting with 68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea not found: ID does not exist" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.314559 4984 scope.go:117] "RemoveContainer" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: E0130 10:29:21.316762 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": container with ID starting with 757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf not found: ID does not exist" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.316782 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf"} err="failed to get container status \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": rpc error: code = NotFound desc = could not find container \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": container with ID starting with 757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf not found: ID does not exist" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.027015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" 
event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerStarted","Data":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.027356 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.034545 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"db62640365b168549f02a4fb4a3c9c3411cf7ca839ca1cf4ad02f485e4d98bf7"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.037196 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerStarted","Data":"2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.075568 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" podStartSLOduration=4.075546178 podStartE2EDuration="4.075546178s" podCreationTimestamp="2026-01-30 10:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:22.073538864 +0000 UTC m=+1066.639842698" watchObservedRunningTime="2026-01-30 10:29:22.075546178 +0000 UTC m=+1066.641850002" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.075671 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-72z27" podStartSLOduration=3.075665821 podStartE2EDuration="3.075665821s" podCreationTimestamp="2026-01-30 10:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:22.055820125 +0000 UTC m=+1066.622123949" 
watchObservedRunningTime="2026-01-30 10:29:22.075665821 +0000 UTC m=+1066.641969645" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.103179 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" path="/var/lib/kubelet/pods/8939f3c8-3f71-4369-b30a-1ce52517ec33/volumes" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.113380 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.048331 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"f2f18e96c846ab218b03d6e8ab1b11093c2d863bf621adabc996f658029479a6"} Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.048809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.072556 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.560899039 podStartE2EDuration="4.072535622s" podCreationTimestamp="2026-01-30 10:29:19 +0000 UTC" firstStartedPulling="2026-01-30 10:29:20.249752975 +0000 UTC m=+1064.816056799" lastFinishedPulling="2026-01-30 10:29:21.761389518 +0000 UTC m=+1066.327693382" observedRunningTime="2026-01-30 10:29:23.066668904 +0000 UTC m=+1067.632972738" watchObservedRunningTime="2026-01-30 10:29:23.072535622 +0000 UTC m=+1067.638839466" Jan 30 10:29:24 crc kubenswrapper[4984]: I0130 10:29:24.055727 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.338393 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.338465 4984 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.432642 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.173186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.670420 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.671599 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.720953 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:29:26 crc kubenswrapper[4984]: E0130 10:29:26.721463 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721490 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: E0130 10:29:26.721549 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="init" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721562 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="init" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721874 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.722798 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.729239 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.731839 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.796308 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.797212 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.799730 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.824125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.824289 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.926381 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.936434 
4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.956873 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.026898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.026987 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.027840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.032182 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.033126 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.042447 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.043594 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.046982 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.048014 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.052763 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.058617 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.071572 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.121124 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128227 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128389 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128726 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.157511 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:27 crc kubenswrapper[4984]: 
I0130 10:29:27.230454 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.231622 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " 
pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.232848 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.255866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.257044 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.348860 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.391312 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.546288 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.550227 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd118357_c4bf_43ef_a738_9fcd6b07aac4.slice/crio-40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265 WatchSource:0}: Error finding container 40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265: Status 404 returned error can't find the container with id 40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265 Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.659520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.660565 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode83eb734_fae0_40ac_85db_8f8c8fb26133.slice/crio-bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9 WatchSource:0}: Error finding container bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9: Status 404 returned error can't find the container with id bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9 Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.850030 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.854440 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c89dde7_c492_44dd_b36c_571540039b30.slice/crio-ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137 WatchSource:0}: Error 
finding container ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137: Status 404 returned error can't find the container with id ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137 Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.928684 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.985376 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd7bd77_9e19_4ad1_9711_e0290f74afa8.slice/crio-a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577 WatchSource:0}: Error finding container a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577: Status 404 returned error can't find the container with id a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577 Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088112 4984 generic.go:334] "Generic (PLEG): container finished" podID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerID="990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc" exitCode=0 Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088217 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerDied","Data":"990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc"} Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerStarted","Data":"40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265"} Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.092328 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerID="70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb" exitCode=0 Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.107956 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerStarted","Data":"a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577"} Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerStarted","Data":"ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137"} Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108074 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerDied","Data":"70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb"} Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108094 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerStarted","Data":"bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9"} Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.103619 4984 generic.go:334] "Generic (PLEG): container finished" podID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerID="b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b" exitCode=0 Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.103706 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerDied","Data":"b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b"} Jan 30 10:29:29 crc 
kubenswrapper[4984]: I0130 10:29:29.113266 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c89dde7-c492-44dd-b36c-571540039b30" containerID="73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb" exitCode=0 Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.113443 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerDied","Data":"73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb"} Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.198214 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.326215 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.326466 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-72z27" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" containerID="cri-o://c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" gracePeriod=10 Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.328465 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.395967 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.397490 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.446361 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495841 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495884 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495993 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.590564 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-72z27" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601075 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601140 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601161 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") 
pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.602008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.602593 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.603303 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.603780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 
10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.688215 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.795218 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.806681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"e83eb734-fae0-40ac-85db-8f8c8fb26133\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.806764 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"e83eb734-fae0-40ac-85db-8f8c8fb26133\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.807644 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e83eb734-fae0-40ac-85db-8f8c8fb26133" (UID: "e83eb734-fae0-40ac-85db-8f8c8fb26133"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.811945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk" (OuterVolumeSpecName: "kube-api-access-wsklk") pod "e83eb734-fae0-40ac-85db-8f8c8fb26133" (UID: "e83eb734-fae0-40ac-85db-8f8c8fb26133"). InnerVolumeSpecName "kube-api-access-wsklk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.877424 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.881372 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908157 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908278 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908669 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908688 4984 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.910912 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd118357-c4bf-43ef-a738-9fcd6b07aac4" (UID: "dd118357-c4bf-43ef-a738-9fcd6b07aac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.937590 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p" (OuterVolumeSpecName: "kube-api-access-nwk8p") pod "dd118357-c4bf-43ef-a738-9fcd6b07aac4" (UID: "dd118357-c4bf-43ef-a738-9fcd6b07aac4"). InnerVolumeSpecName "kube-api-access-nwk8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.010220 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.010727 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.058330 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119582 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119797 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119847 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119900 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.131020 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t" (OuterVolumeSpecName: "kube-api-access-tw55t") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "kube-api-access-tw55t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138697 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerDied","Data":"bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9"} Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138742 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138808 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.154931 4984 generic.go:334] "Generic (PLEG): container finished" podID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" exitCode=0 Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155022 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155062 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"} Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155098 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46"} Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155122 4984 scope.go:117] "RemoveContainer" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.165619 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.166805 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerDied","Data":"40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265"} Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.166845 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.188474 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config" (OuterVolumeSpecName: "config") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.194718 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.201786 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222473 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222507 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222521 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222536 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc 
kubenswrapper[4984]: I0130 10:29:30.229065 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.269672 4984 scope.go:117] "RemoveContainer" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.305889 4984 scope.go:117] "RemoveContainer" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.308883 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": container with ID starting with c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934 not found: ID does not exist" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.308929 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"} err="failed to get container status \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": rpc error: code = NotFound desc = could not find container \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": container with ID starting with c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934 not found: ID does not exist" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.308963 4984 scope.go:117] "RemoveContainer" 
containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.309346 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": container with ID starting with 524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d not found: ID does not exist" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.309390 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"} err="failed to get container status \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": rpc error: code = NotFound desc = could not find container \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": container with ID starting with 524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d not found: ID does not exist" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.324647 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.370994 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:29:30 crc kubenswrapper[4984]: W0130 10:29:30.371628 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3333aa79_f6c6_4ae8_9b45_233127846dff.slice/crio-855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec WatchSource:0}: Error finding container 855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec: Status 404 
returned error can't find the container with id 855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.598124 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.601358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.604044 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.629268 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"4c89dde7-c492-44dd-b36c-571540039b30\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.629305 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"4c89dde7-c492-44dd-b36c-571540039b30\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.631419 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.632071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c89dde7-c492-44dd-b36c-571540039b30" (UID: "4c89dde7-c492-44dd-b36c-571540039b30"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633630 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="init" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633655 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="init" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633690 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633697 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633712 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633718 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633729 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633735 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633753 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633760 4984 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633900 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633914 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633923 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633935 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.638402 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641061 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641313 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dcrnm" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641733 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641913 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.655405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j" (OuterVolumeSpecName: "kube-api-access-s4s2j") pod "4c89dde7-c492-44dd-b36c-571540039b30" (UID: "4c89dde7-c492-44dd-b36c-571540039b30"). InnerVolumeSpecName "kube-api-access-s4s2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.660872 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.700495 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.730710 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.730814 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.732609 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dd7bd77-9e19-4ad1-9711-e0290f74afa8" (UID: "0dd7bd77-9e19-4ad1-9711-e0290f74afa8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735290 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735328 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735422 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735582 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735729 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735749 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735761 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.736867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv" (OuterVolumeSpecName: "kube-api-access-w8kzv") pod "0dd7bd77-9e19-4ad1-9711-e0290f74afa8" (UID: "0dd7bd77-9e19-4ad1-9711-e0290f74afa8"). InnerVolumeSpecName "kube-api-access-w8kzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837406 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837455 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837532 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837562 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837598 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837658 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839159 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839193 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.839204 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839670 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:31.339238939 +0000 UTC m=+1075.905542763 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.840507 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.840780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.845664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.856940 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.877711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:31 
crc kubenswrapper[4984]: I0130 10:29:31.119108 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.119885 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.119916 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.120145 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.120778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.122875 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.125793 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.128038 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142188 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142374 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142538 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142629 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.155051 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.155813 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-ztr94 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-bww49" podUID="b37def03-aa60-444f-b361-08f97aa07211" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.164591 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.165607 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.176435 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerDied","Data":"a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185479 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185535 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188502 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerDied","Data":"ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188545 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188630 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.190419 4984 generic.go:334] "Generic (PLEG): container finished" podID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerID="f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f" exitCode=0 Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.190487 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.191659 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.191693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerStarted","Data":"855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.192686 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.207640 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243736 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243766 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243802 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243862 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: 
\"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243893 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243924 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243951 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244031 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: 
\"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244074 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244191 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 
10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.245223 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.245613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.248122 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.248361 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.249766 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.266131 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345580 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345720 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345766 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: 
\"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345962 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346179 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346230 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346269 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346299 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346355 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346420 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346840 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts" (OuterVolumeSpecName: 
"scripts") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347662 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347746 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.348052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348509 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348529 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348579 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:32.348562544 +0000 UTC m=+1076.914866458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.349182 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.350866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.350872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.351672 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94" (OuterVolumeSpecName: "kube-api-access-ztr94") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "kube-api-access-ztr94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.352446 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.353469 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.356442 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.360964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.371127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.448656 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449068 4984 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449205 4984 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449414 4984 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449540 4984 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449669 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449792 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.491784 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.009780 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"] Jan 30 10:29:32 crc kubenswrapper[4984]: W0130 10:29:32.014462 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cfe4feb_b1bb_4904_9955_c5833ef34e9e.slice/crio-fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3 WatchSource:0}: Error finding container fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3: Status 404 returned error can't find the container with id fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3 Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.099739 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" path="/var/lib/kubelet/pods/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa/volumes" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.199128 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerStarted","Data":"ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0"} Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.199856 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.200225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerStarted","Data":"fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3"} Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.200296 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.228725 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podStartSLOduration=3.22868238 podStartE2EDuration="3.22868238s" podCreationTimestamp="2026-01-30 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:32.224044224 +0000 UTC m=+1076.790348058" watchObservedRunningTime="2026-01-30 10:29:32.22868238 +0000 UTC m=+1076.794986204" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.293658 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.307859 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.328832 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mwcqt"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.329883 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.335106 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mwcqt"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366099 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366278 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366290 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366325 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift 
podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:34.366311749 +0000 UTC m=+1078.932615573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.397974 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.398915 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.400783 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.407051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"] Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468411 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: 
I0130 10:29:32.468584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.469124 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.491915 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.569978 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc 
kubenswrapper[4984]: I0130 10:29:32.570122 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.570994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.585344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.645785 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.711288 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.202840 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mwcqt"] Jan 30 10:29:33 crc kubenswrapper[4984]: W0130 10:29:33.212760 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c0dd46_b897_468f_87a0_a335dd8fd6d5.slice/crio-d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3 WatchSource:0}: Error finding container d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3: Status 404 returned error can't find the container with id d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3 Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.289624 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"] Jan 30 10:29:33 crc kubenswrapper[4984]: W0130 10:29:33.293602 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849571b4_26bb_4853_af9c_f717967dea41.slice/crio-c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f WatchSource:0}: Error finding container c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f: Status 404 returned error can't find the container with id c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.985623 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.987229 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.991693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.998335 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.998695 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.999116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100194 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100400 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: 
\"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100907 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37def03-aa60-444f-b361-08f97aa07211" path="/var/lib/kubelet/pods/b37def03-aa60-444f-b361-08f97aa07211/volumes" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.101493 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.123610 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.218894 4984 generic.go:334] "Generic (PLEG): container finished" podID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerID="4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9" exitCode=0 Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.218989 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerDied","Data":"4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9"} Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.219306 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" 
event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerStarted","Data":"d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3"} Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.220924 4984 generic.go:334] "Generic (PLEG): container finished" podID="849571b4-26bb-4853-af9c-f717967dea41" containerID="3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2" exitCode=0 Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.221927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerDied","Data":"3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2"} Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.221963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerStarted","Data":"c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f"} Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.324551 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.404898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405148 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405189 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405308 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:38.405230891 +0000 UTC m=+1082.971534715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.077932 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.085445 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135873 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135928 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"849571b4-26bb-4853-af9c-f717967dea41\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135950 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.136005 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"849571b4-26bb-4853-af9c-f717967dea41\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.138558 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83c0dd46-b897-468f-87a0-a335dd8fd6d5" (UID: "83c0dd46-b897-468f-87a0-a335dd8fd6d5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.138617 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "849571b4-26bb-4853-af9c-f717967dea41" (UID: "849571b4-26bb-4853-af9c-f717967dea41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.143415 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn" (OuterVolumeSpecName: "kube-api-access-5whqn") pod "849571b4-26bb-4853-af9c-f717967dea41" (UID: "849571b4-26bb-4853-af9c-f717967dea41"). InnerVolumeSpecName "kube-api-access-5whqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.144653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd" (OuterVolumeSpecName: "kube-api-access-j2bcd") pod "83c0dd46-b897-468f-87a0-a335dd8fd6d5" (UID: "83c0dd46-b897-468f-87a0-a335dd8fd6d5"). InnerVolumeSpecName "kube-api-access-j2bcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238181 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238522 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238536 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238548 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.243597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244475 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerDied","Data":"d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3"} Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244542 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244496 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246935 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerDied","Data":"c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f"} Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246980 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246981 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:36 crc kubenswrapper[4984]: W0130 10:29:36.249629 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4914ea_6a7b_47c6_abbd_b2a0a067361d.slice/crio-c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177 WatchSource:0}: Error finding container c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177: Status 404 returned error can't find the container with id c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177 Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.256327 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.261624 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerStarted","Data":"77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.267990 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerID="0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2" exitCode=0 Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.268050 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerDied","Data":"0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.268082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerStarted","Data":"c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.288722 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j9rvs" podStartSLOduration=2.468319862 podStartE2EDuration="6.288700648s" podCreationTimestamp="2026-01-30 10:29:31 +0000 UTC" firstStartedPulling="2026-01-30 10:29:32.017618996 +0000 UTC m=+1076.583922820" lastFinishedPulling="2026-01-30 10:29:35.837999782 +0000 UTC m=+1080.404303606" observedRunningTime="2026-01-30 10:29:37.286566891 +0000 UTC m=+1081.852870755" watchObservedRunningTime="2026-01-30 10:29:37.288700648 +0000 UTC m=+1081.855004472" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560524 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:37 crc kubenswrapper[4984]: E0130 10:29:37.560905 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560922 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc 
kubenswrapper[4984]: E0130 10:29:37.560934 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560941 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561111 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561134 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561680 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.564021 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.564197 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.579593 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663506 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663532 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663583 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764706 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.770386 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.770850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.774370 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.799789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod 
\"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.879913 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.482495 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483191 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483209 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483282 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:46.483260901 +0000 UTC m=+1091.049564725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.498947 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:38 crc kubenswrapper[4984]: W0130 10:29:38.563236 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfce8525_20d3_4c57_9638_37a46571c375.slice/crio-263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d WatchSource:0}: Error finding container 263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d: Status 404 returned error can't find the container with id 263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.645041 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.687674 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.687908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.688527 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd4914ea-6a7b-47c6-abbd-b2a0a067361d" (UID: "dd4914ea-6a7b-47c6-abbd-b2a0a067361d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.696989 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq" (OuterVolumeSpecName: "kube-api-access-zlwlq") pod "dd4914ea-6a7b-47c6-abbd-b2a0a067361d" (UID: "dd4914ea-6a7b-47c6-abbd-b2a0a067361d"). InnerVolumeSpecName "kube-api-access-zlwlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.789945 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.789986 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.285950 4984 generic.go:334] "Generic (PLEG): container finished" podID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerID="627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73" exitCode=0 Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.286023 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288184 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerDied","Data":"c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288215 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288279 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.290049 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerStarted","Data":"263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.291852 4984 generic.go:334] "Generic (PLEG): container finished" podID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48" exitCode=0 Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.291907 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.842875 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.883472 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.947168 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.947484 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns" containerID="cri-o://2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" gracePeriod=10 Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.308448 4984 generic.go:334] "Generic (PLEG): container finished" podID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" 
containerID="2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" exitCode=0 Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.308507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.310418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.310589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.323570 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.323882 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.336528 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.704604969 podStartE2EDuration="58.336507716s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:49.479167662 +0000 UTC m=+1034.045471486" lastFinishedPulling="2026-01-30 10:29:05.111070409 +0000 UTC m=+1049.677374233" observedRunningTime="2026-01-30 10:29:40.33441277 +0000 UTC m=+1084.900716624" watchObservedRunningTime="2026-01-30 10:29:40.336507716 +0000 UTC m=+1084.902811550" Jan 30 10:29:40 crc 
kubenswrapper[4984]: I0130 10:29:40.367767 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.49467955 podStartE2EDuration="58.36774983s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:44.220227688 +0000 UTC m=+1028.786531512" lastFinishedPulling="2026-01-30 10:29:05.093297958 +0000 UTC m=+1049.659601792" observedRunningTime="2026-01-30 10:29:40.357978406 +0000 UTC m=+1084.924282230" watchObservedRunningTime="2026-01-30 10:29:40.36774983 +0000 UTC m=+1084.934053664" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.421870 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.441684 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.483304 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624447 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624538 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624696 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.635799 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q" (OuterVolumeSpecName: "kube-api-access-sp28q") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "kube-api-access-sp28q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.666173 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config" (OuterVolumeSpecName: "config") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.674606 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.679794 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.726977 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727003 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727012 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727020 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.343379 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.343073 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c"} Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.355588 4984 scope.go:117] "RemoveContainer" containerID="2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.383341 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.391069 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.400497 4984 scope.go:117] "RemoveContainer" containerID="f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05" Jan 30 10:29:42 crc kubenswrapper[4984]: I0130 10:29:42.101094 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" path="/var/lib/kubelet/pods/364f1e33-f14a-4248-82d5-eca3ab3e36c3/volumes" Jan 30 10:29:42 crc kubenswrapper[4984]: I0130 10:29:42.102919 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" path="/var/lib/kubelet/pods/dd4914ea-6a7b-47c6-abbd-b2a0a067361d/volumes" Jan 30 10:29:43 crc kubenswrapper[4984]: I0130 10:29:43.363746 4984 generic.go:334] "Generic (PLEG): container finished" podID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerID="77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6" exitCode=0 Jan 30 10:29:43 crc kubenswrapper[4984]: I0130 10:29:43.363833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" 
event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerDied","Data":"77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6"} Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003509 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003906 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003929 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update" Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003952 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003959 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns" Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003976 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="init" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003983 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="init" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.004185 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.004208 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.005745 4984 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.008563 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.030208 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.085570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.085666 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.187503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.187565 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod 
\"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.188264 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.221059 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.336624 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.523142 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.541233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.613116 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.575609 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.579370 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.585236 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m4spx" podUID="63184ee8-263b-4506-8844-4ae4fd2a80c7" containerName="ovn-controller" probeResult="failure" output=< Jan 30 10:29:47 crc kubenswrapper[4984]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 10:29:47 crc kubenswrapper[4984]: > Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.812139 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.813991 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.815849 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.828856 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.854942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.855013 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.855043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856154 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: 
\"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856283 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957557 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957605 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod 
\"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957765 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958506 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958522 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: 
\"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958600 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958807 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.961712 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.979184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:48 crc kubenswrapper[4984]: I0130 10:29:48.131526 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.019831 4984 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : Timed out while waiting for systemd to remove kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice" Jan 30 10:29:50 crc kubenswrapper[4984]: E0130 10:29:50.019888 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : Timed out while waiting for systemd to remove kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.354822 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400292 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402466 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402602 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402697 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.404210 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.404892 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.406533 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf" (OuterVolumeSpecName: "kube-api-access-lrqkf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "kube-api-access-lrqkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.415743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432573 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts" (OuterVolumeSpecName: "scripts") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432676 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432689 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432730 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerDied","Data":"fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3"} Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432758 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.436382 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.446881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505426 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505673 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505741 4984 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505796 4984 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505858 4984 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505921 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.506028 4984 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505871 4984 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"] Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.518737 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"] Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.847163 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.882348 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.891155 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:50 crc kubenswrapper[4984]: W0130 10:29:50.901735 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod758d234e_dcc5_4555_9403_6afac762f662.slice/crio-b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0 WatchSource:0}: Error finding container b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0: Status 404 returned error can't find the container with id b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0 Jan 30 10:29:50 crc kubenswrapper[4984]: W0130 10:29:50.904842 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b286d6_b58f_4d49_ae49_e3acdc77b7f5.slice/crio-6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59 WatchSource:0}: Error finding container 6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59: Status 404 returned error can't find the container with id 6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59 Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.446979 4984 generic.go:334] "Generic (PLEG): container finished" podID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" 
containerID="f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1" exitCode=0 Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.447287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerDied","Data":"f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1"} Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.447484 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerStarted","Data":"4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b"} Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.450894 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerStarted","Data":"b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90"} Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.452416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59"} Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.454109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerStarted","Data":"4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c"} Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.454238 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerStarted","Data":"b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0"} Jan 30 10:29:51 crc 
kubenswrapper[4984]: I0130 10:29:51.490467 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m4spx-config-8pbt4" podStartSLOduration=4.490445614 podStartE2EDuration="4.490445614s" podCreationTimestamp="2026-01-30 10:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:51.48290781 +0000 UTC m=+1096.049211654" watchObservedRunningTime="2026-01-30 10:29:51.490445614 +0000 UTC m=+1096.056749438" Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.498571 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v95fj" podStartSLOduration=2.7357691109999998 podStartE2EDuration="14.498552613s" podCreationTimestamp="2026-01-30 10:29:37 +0000 UTC" firstStartedPulling="2026-01-30 10:29:38.566519431 +0000 UTC m=+1083.132823255" lastFinishedPulling="2026-01-30 10:29:50.329302923 +0000 UTC m=+1094.895606757" observedRunningTime="2026-01-30 10:29:51.496314102 +0000 UTC m=+1096.062617936" watchObservedRunningTime="2026-01-30 10:29:51.498552613 +0000 UTC m=+1096.064856427" Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.098755 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" path="/var/lib/kubelet/pods/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d/volumes" Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.467319 4984 generic.go:334] "Generic (PLEG): container finished" podID="758d234e-dcc5-4555-9403-6afac762f662" containerID="4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c" exitCode=0 Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.467418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" 
event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerDied","Data":"4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c"} Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.603391 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-m4spx" Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.980537 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.057817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.057868 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.058708 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8da6f39-d290-44c4-93cd-0b2fcc37e01c" (UID: "f8da6f39-d290-44c4-93cd-0b2fcc37e01c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.061036 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp" (OuterVolumeSpecName: "kube-api-access-pccfp") pod "f8da6f39-d290-44c4-93cd-0b2fcc37e01c" (UID: "f8da6f39-d290-44c4-93cd-0b2fcc37e01c"). InnerVolumeSpecName "kube-api-access-pccfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.161286 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.161329 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.478327 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerDied","Data":"4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b"} Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.480026 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.478360 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9mg8c" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.486862 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"740d26b17d9ecc7d033ef1e735065ca6f39e638b98e546db4c60c2bd674ddcce"} Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.487105 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"0947a163fb316789a29764a1ead398a92cdde63adb51fd659a482380791f84ff"} Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.701429 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.756623 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.871387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.871476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872422 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts" (OuterVolumeSpecName: "scripts") pod 
"758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872529 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872671 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872747 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873407 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873464 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873825 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873859 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873880 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run" (OuterVolumeSpecName: "var-run") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.877586 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v" (OuterVolumeSpecName: "kube-api-access-c2m9v") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). 
InnerVolumeSpecName "kube-api-access-c2m9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974616 4984 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974672 4984 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974684 4984 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974695 4984 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974707 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.020240 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.027453 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.039036 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 
10:29:54.045872 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046221 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046240 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046269 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046276 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046291 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046298 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046462 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046474 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046490 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 
10:29:54.047118 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.058281 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.077213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.077325 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.122809 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758d234e-dcc5-4555-9403-6afac762f662" path="/var/lib/kubelet/pods/758d234e-dcc5-4555-9403-6afac762f662/volumes" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.141105 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.143464 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.173754 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.180058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.180167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.181077 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.181092 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.182761 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.194516 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.200146 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.218793 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.256998 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.258341 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.260793 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.267599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281484 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281696 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod 
\"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.340203 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.341275 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.343040 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.343972 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.344148 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.344428 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.355859 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.364628 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383175 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383305 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383357 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383384 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod 
\"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383460 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.384491 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.400490 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.402087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") 
pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.445670 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.446862 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.464164 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.479657 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485278 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485338 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " 
pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485510 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485535 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.488817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.516899 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"39cba512d1d3d48fc987b4a47e0907fd75abdd7985bb95833c667d3883d6b6ba"} Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.516955 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"912cf51779605e2b4fce3df3f69aad596f18f154299af844d168631b86214f42"} Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.518121 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.521650 4984 scope.go:117] "RemoveContainer" containerID="4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.521829 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.543141 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.544075 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.557724 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.557894 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.559896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.579770 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586587 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586815 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " 
pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.593321 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.594220 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.610937 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.660426 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688720 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688792 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688846 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688896 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.689582 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod 
\"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.709089 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.790381 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.792581 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.792824 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.793709 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.812392 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.881602 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.942190 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.012374 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.067076 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.104648 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.272359 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.383320 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.533961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerStarted","Data":"498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.534013 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.534045 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerStarted","Data":"bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539122 4984 generic.go:334] "Generic (PLEG): container finished" podID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerID="2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f" exitCode=0 Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539189 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerDied","Data":"2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539255 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerStarted","Data":"80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.543774 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.545084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerStarted","Data":"65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.545121 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" 
event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerStarted","Data":"354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546804 4984 generic.go:334] "Generic (PLEG): container finished" podID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerID="636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be" exitCode=0 Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546845 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerDied","Data":"636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546863 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerStarted","Data":"7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.549507 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.556755 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-622f-account-create-update-xxrl4" podStartSLOduration=1.5567343359999999 podStartE2EDuration="1.556734336s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:55.550537969 +0000 UTC m=+1100.116841793" watchObservedRunningTime="2026-01-30 10:29:55.556734336 +0000 UTC m=+1100.123038160" Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.577454 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-p7n6d" podStartSLOduration=1.577434796 
podStartE2EDuration="1.577434796s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:55.566152121 +0000 UTC m=+1100.132455945" watchObservedRunningTime="2026-01-30 10:29:55.577434796 +0000 UTC m=+1100.143738620" Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.713538 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26267a37_c8e7_45b3_af7f_8050a58cb697.slice/crio-1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca WatchSource:0}: Error finding container 1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca: Status 404 returned error can't find the container with id 1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.715728 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5490b62_8700_4c9c_b4f7_517c71f91c46.slice/crio-05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033 WatchSource:0}: Error finding container 05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033: Status 404 returned error can't find the container with id 05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033 Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.717610 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c1d730_34f1_4912_a0e9_f19d10e9ec9b.slice/crio-8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5 WatchSource:0}: Error finding container 8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5: Status 404 returned error can't find the container with id 8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5 Jan 30 10:29:56 crc 
kubenswrapper[4984]: I0130 10:29:56.118373 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" path="/var/lib/kubelet/pods/f8da6f39-d290-44c4-93cd-0b2fcc37e01c/volumes" Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568057 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"765fc8a037b5105c4ecd45c14e6820f91f9d4493414014d4375a3af65079e680"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568112 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"eb86366b30fc87a2dbae1fc21c1f057dc57f579f1f38e1b510294178a5410364"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"673d9fd9bff276cb6981367eba1c808558ba7a19bfd44840ba924414dda09b8b"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568144 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"14d22a96a0ac507fec174fe47c4bae47ec6431d02fc94a795eaa928c481c1855"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.569724 4984 generic.go:334] "Generic (PLEG): container finished" podID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerID="e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.570147 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" 
event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerDied","Data":"e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.570188 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerStarted","Data":"1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.571273 4984 generic.go:334] "Generic (PLEG): container finished" podID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerID="65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.571357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerDied","Data":"65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585583 4984 generic.go:334] "Generic (PLEG): container finished" podID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerID="f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerDied","Data":"f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585672 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerStarted","Data":"05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.595775 4984 
generic.go:334] "Generic (PLEG): container finished" podID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerID="498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.595982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerDied","Data":"498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.609710 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerStarted","Data":"8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.006050 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.126955 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.128862 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.128904 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.133784 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "341b21ee-dc5c-48f9-9810-85d1af9b9de9" (UID: "341b21ee-dc5c-48f9-9810-85d1af9b9de9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.148639 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s" (OuterVolumeSpecName: "kube-api-access-w948s") pod "341b21ee-dc5c-48f9-9810-85d1af9b9de9" (UID: "341b21ee-dc5c-48f9-9810-85d1af9b9de9"). InnerVolumeSpecName "kube-api-access-w948s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.231673 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.232476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.232984 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4f293b1-64af-45c3-8ee1-b8df7efdde3e" (UID: "c4f293b1-64af-45c3-8ee1-b8df7efdde3e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233317 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233350 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233368 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.234536 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5" (OuterVolumeSpecName: "kube-api-access-9b9m5") pod "c4f293b1-64af-45c3-8ee1-b8df7efdde3e" (UID: "c4f293b1-64af-45c3-8ee1-b8df7efdde3e"). InnerVolumeSpecName "kube-api-access-9b9m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.334688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerDied","Data":"7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622474 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622540 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.626801 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.627388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerDied","Data":"80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.627444 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.040037 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.145927 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"d291ef2c-2cdb-47be-b508-efd4c8282791\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.146024 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"d291ef2c-2cdb-47be-b508-efd4c8282791\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.148139 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d291ef2c-2cdb-47be-b508-efd4c8282791" (UID: "d291ef2c-2cdb-47be-b508-efd4c8282791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.157442 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d" (OuterVolumeSpecName: "kube-api-access-5nn4d") pod "d291ef2c-2cdb-47be-b508-efd4c8282791" (UID: "d291ef2c-2cdb-47be-b508-efd4c8282791"). InnerVolumeSpecName "kube-api-access-5nn4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.195701 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.203230 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.211169 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.248489 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.248524 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.349932 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"f5490b62-8700-4c9c-b4f7-517c71f91c46\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350555 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"f5490b62-8700-4c9c-b4f7-517c71f91c46\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"26267a37-c8e7-45b3-af7f-8050a58cb697\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350506 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c6c0cd3-99cd-454e-8ceb-000141c59c2b" (UID: "1c6c0cd3-99cd-454e-8ceb-000141c59c2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"26267a37-c8e7-45b3-af7f-8050a58cb697\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350850 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"26267a37-c8e7-45b3-af7f-8050a58cb697" (UID: "26267a37-c8e7-45b3-af7f-8050a58cb697"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351465 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351488 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351729 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5490b62-8700-4c9c-b4f7-517c71f91c46" (UID: "f5490b62-8700-4c9c-b4f7-517c71f91c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.353810 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2" (OuterVolumeSpecName: "kube-api-access-g5sq2") pod "26267a37-c8e7-45b3-af7f-8050a58cb697" (UID: "26267a37-c8e7-45b3-af7f-8050a58cb697"). InnerVolumeSpecName "kube-api-access-g5sq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.354432 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b" (OuterVolumeSpecName: "kube-api-access-gjj4b") pod "f5490b62-8700-4c9c-b4f7-517c71f91c46" (UID: "f5490b62-8700-4c9c-b4f7-517c71f91c46"). 
InnerVolumeSpecName "kube-api-access-gjj4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.354467 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw" (OuterVolumeSpecName: "kube-api-access-7ckgw") pod "1c6c0cd3-99cd-454e-8ceb-000141c59c2b" (UID: "1c6c0cd3-99cd-454e-8ceb-000141c59c2b"). InnerVolumeSpecName "kube-api-access-7ckgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453521 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453553 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453579 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453589 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.637998 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerDied","Data":"354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404"} Jan 30 
10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.638041 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.638096 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648174 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerDied","Data":"05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648217 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648294 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655168 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerDied","Data":"bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655208 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655276 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.663308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"48437608a96e028280723cc171cbf034f324bf38044abaf69122dd3bd5845435"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.663370 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"9750871e7ae5708670ee16c69b20ad76a8fbbd279ada870e92a75b6ca7bbac5f"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerDied","Data":"1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665221 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665300 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033747 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033770 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033802 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033812 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033820 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033837 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033845 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033856 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033863 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033892 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034085 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034102 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034117 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034134 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034143 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034151 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" 
containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034892 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.037204 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.053984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.167177 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.167265 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.268913 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.269005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq442\" 
(UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.270016 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.308173 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.361158 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.155184 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.156356 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.160912 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.166686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.190071 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285881 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387434 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387504 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.388548 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.397456 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.415206 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.495528 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.687845 4984 generic.go:334] "Generic (PLEG): container finished" podID="bfce8525-20d3-4c57-9638-37a46571c375" containerID="b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90" exitCode=0 Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.688131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerDied","Data":"b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.408477 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528007 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528181 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528263 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.533808 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn" (OuterVolumeSpecName: "kube-api-access-ft8sn") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "kube-api-access-ft8sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.555805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.555944 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.597750 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data" (OuterVolumeSpecName: "config-data") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630455 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630481 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630490 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630523 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.713307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"c9ff95e5d9c3f7e2c0351ae47b4717eac8980d45a30b5c0fbbc2e013b3c72671"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.714308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerDied","Data":"263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.714346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d" Jan 30 10:30:02 crc 
kubenswrapper[4984]: I0130 10:30:02.714368 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.783538 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.797667 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:30:02 crc kubenswrapper[4984]: W0130 10:30:02.798883 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c13999b_7269_403d_8be6_78d42f65f26c.slice/crio-d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6 WatchSource:0}: Error finding container d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6: Status 404 returned error can't find the container with id d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6 Jan 30 10:30:02 crc kubenswrapper[4984]: W0130 10:30:02.802681 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e5b8e4_a5a3_4cca_bb11_a627d40f3dc1.slice/crio-286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05 WatchSource:0}: Error finding container 286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05: Status 404 returned error can't find the container with id 286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05 Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.001120 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:30:03 crc kubenswrapper[4984]: 
I0130 10:30:03.001475 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.083935 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:03 crc kubenswrapper[4984]: E0130 10:30:03.084297 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.084309 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.084479 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.085496 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.098833 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254427 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254517 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.355922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.355964 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356006 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356046 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356090 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: 
\"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356999 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.357028 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.358009 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.362029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.374826 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: 
\"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.404896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.733404 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerStarted","Data":"429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745780 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"7ec6f6ab3452d9616862534ade3255f4ce8a1976c78b1253278295d1327bedf7"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745834 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"d6a28f165cb828c824461c4855e664f5762b43f3523c7ff8582e2bd062832103"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745846 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"2bc42dc2361aaae3eac11f19a64fd2db20593f996c5450204d788d00e4a9cbfc"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.750911 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerStarted","Data":"9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.750965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" 
event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerStarted","Data":"286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.752861 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-whl8p" podStartSLOduration=2.805698036 podStartE2EDuration="9.752844406s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="2026-01-30 10:29:55.734450059 +0000 UTC m=+1100.300753893" lastFinishedPulling="2026-01-30 10:30:02.681596419 +0000 UTC m=+1107.247900263" observedRunningTime="2026-01-30 10:30:03.748107206 +0000 UTC m=+1108.314411030" watchObservedRunningTime="2026-01-30 10:30:03.752844406 +0000 UTC m=+1108.319148230" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767375 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c13999b-7269-403d-8be6-78d42f65f26c" containerID="673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176" exitCode=0 Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767425 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerDied","Data":"673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerStarted","Data":"d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.769220 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-79vbd" podStartSLOduration=4.769201291 podStartE2EDuration="4.769201291s" podCreationTimestamp="2026-01-30 
10:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:03.765516991 +0000 UTC m=+1108.331820825" watchObservedRunningTime="2026-01-30 10:30:03.769201291 +0000 UTC m=+1108.335505115"
Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.901193 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"]
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.776543 4984 generic.go:334] "Generic (PLEG): container finished" podID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerID="9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423" exitCode=0
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.776639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerDied","Data":"9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423"}
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779735 4984 generic.go:334] "Generic (PLEG): container finished" podID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerID="a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550" exitCode=0
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550"}
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779839 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerStarted","Data":"6d30e54b31da488e5549e257dd0db6b5ad4bbb86668a8fa8669b611ed7db9b43"}
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.809236 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"55e7a1871bdaa3502d542ee6951a043704271b2d5d204c7e19e40d691c88e49e"}
Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.869142 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.726632932 podStartE2EDuration="35.86911968s" podCreationTimestamp="2026-01-30 10:29:29 +0000 UTC" firstStartedPulling="2026-01-30 10:29:50.916709248 +0000 UTC m=+1095.483013112" lastFinishedPulling="2026-01-30 10:29:58.059196036 +0000 UTC m=+1102.625499860" observedRunningTime="2026-01-30 10:30:04.863016523 +0000 UTC m=+1109.429320357" watchObservedRunningTime="2026-01-30 10:30:04.86911968 +0000 UTC m=+1109.435423504"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.139997 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.218508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"]
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248084 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"]
Jan 30 10:30:05 crc kubenswrapper[4984]: E0130 10:30:05.248503 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248527 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248734 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.249862 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.251787 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.259931 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"]
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") "
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286921 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") "
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286979 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") "
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.287899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.302953 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979" (OuterVolumeSpecName: "kube-api-access-9q979") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "kube-api-access-9q979". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.311901 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.388926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389307 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389353 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389397 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389443 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389588 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389617 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389630 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.491825 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.491942 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492083 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492172 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492730 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493071 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493524 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.511023 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.634063 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842413 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerDied","Data":"d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6"}
Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842794 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6"
Jan 30 10:30:06 crc kubenswrapper[4984]: W0130 10:30:06.096790 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d82977_c98d_495c_bb24_89cbe285c74e.slice/crio-95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c WatchSource:0}: Error finding container 95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c: Status 404 returned error can't find the container with id 95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.107342 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"]
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.137281 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd"
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.203114 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") "
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.203608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") "
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.204012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" (UID: "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.209380 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442" (OuterVolumeSpecName: "kube-api-access-wq442") pod "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" (UID: "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1"). InnerVolumeSpecName "kube-api-access-wq442". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.305773 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.305816 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.851913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerStarted","Data":"c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c"}
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.852969 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c"}
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854305 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerDied","Data":"286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05"}
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854339 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05"
Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854387 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.862849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.863225 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.862957 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns" containerID="cri-o://c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c" gracePeriod=10
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.887637 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" podStartSLOduration=4.887615919 podStartE2EDuration="4.887615919s" podCreationTimestamp="2026-01-30 10:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:07.879565319 +0000 UTC m=+1112.445869143" watchObservedRunningTime="2026-01-30 10:30:07.887615919 +0000 UTC m=+1112.453919743"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.879648 4984 generic.go:334] "Generic (PLEG): container finished" podID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerID="c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c" exitCode=0
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.879834 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.881470 4984 generic.go:334] "Generic (PLEG): container finished" podID="90d82977-c98d-495c-bb24-89cbe285c74e" containerID="d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4" exitCode=0
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.881496 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.258867 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373814 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") "
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373917 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") "
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373967 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") "
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.374076 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") "
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.374092 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") "
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.380374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf" (OuterVolumeSpecName: "kube-api-access-nvxvf") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "kube-api-access-nvxvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.420905 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.425480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.426653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.432947 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config" (OuterVolumeSpecName: "config") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475453 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475487 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475501 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475514 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475526 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.616668 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-79vbd"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.623865 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-79vbd"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.897055 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898423 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"6d30e54b31da488e5549e257dd0db6b5ad4bbb86668a8fa8669b611ed7db9b43"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898451 4984 scope.go:117] "RemoveContainer" containerID="c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898556 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.977113 4984 scope.go:117] "RemoveContainer" containerID="a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.982602 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.989216 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:11.907783 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:11.938873 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podStartSLOduration=6.938849645 podStartE2EDuration="6.938849645s" podCreationTimestamp="2026-01-30 10:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:11.932352228 +0000 UTC m=+1116.498656072" watchObservedRunningTime="2026-01-30 10:30:11.938849645 +0000 UTC m=+1116.505153469"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:12.102230 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" path="/var/lib/kubelet/pods/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1/volumes"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:12.103840 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" path="/var/lib/kubelet/pods/18c601d8-9d81-458a-b7d4-e0a68704af03/volumes"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.629412 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l7nmz"]
Jan 30 10:30:25 crc kubenswrapper[4984]: E0130 10:30:15.630053 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630065 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns"
Jan 30 10:30:25 crc kubenswrapper[4984]: E0130 10:30:15.630080 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update"
Jan 30 10:30:25 crc kubenswrapper[4984]: E0130 10:30:15.630118 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="init"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630126 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="init"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630328 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630341 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630849 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.633111 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.635644 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.646134 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l7nmz"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.707065 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"]
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.707439 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" containerID="cri-o://ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" gracePeriod=10
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.757697 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.757844 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.859497 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.859606 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.860426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.883130 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.956860 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l7nmz"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:17.974103 4984 generic.go:334] "Generic (PLEG): container finished" podID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerID="ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" exitCode=0
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:17.974195 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0"}
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:19.882587 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused"
Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:24.883482 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused"
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.053069 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec"}
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.053435 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec"
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.101049 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") "
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") "
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141792 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") "
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") "
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141879 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") "
Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.162638 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz" (OuterVolumeSpecName: "kube-api-access-57hbz") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "kube-api-access-57hbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.185482 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.193427 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.200843 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.214118 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config" (OuterVolumeSpecName: "config") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.217734 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243484 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243521 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243534 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243546 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243559 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.059352 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.059369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerStarted","Data":"38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540"} Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.098543 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.106418 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.072622 4984 generic.go:334] "Generic (PLEG): container finished" podID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerID="0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf" exitCode=0 Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.072705 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerDied","Data":"0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf"} Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.103474 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" path="/var/lib/kubelet/pods/3333aa79-f6c6-4ae8-9b45-233127846dff/volumes" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.423541 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.497607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.497669 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.498468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca9a5e83-0bd4-4550-a3c9-e297cc831e99" (UID: "ca9a5e83-0bd4-4550-a3c9-e297cc831e99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.503680 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd" (OuterVolumeSpecName: "kube-api-access-rfbhd") pod "ca9a5e83-0bd4-4550-a3c9-e297cc831e99" (UID: "ca9a5e83-0bd4-4550-a3c9-e297cc831e99"). InnerVolumeSpecName "kube-api-access-rfbhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.600575 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.601733 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.092187 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.103678 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerDied","Data":"38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540"} Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.103737 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540" Jan 30 10:30:33 crc kubenswrapper[4984]: I0130 10:30:33.000868 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:30:33 crc kubenswrapper[4984]: I0130 10:30:33.001405 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:30:34 crc kubenswrapper[4984]: I0130 10:30:34.153886 4984 generic.go:334] "Generic (PLEG): container finished" podID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerID="429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534" exitCode=0 Jan 30 10:30:34 crc kubenswrapper[4984]: I0130 10:30:34.153959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerDied","Data":"429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534"} Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.501385 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606073 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606280 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.611486 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg" (OuterVolumeSpecName: "kube-api-access-nsfvg") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "kube-api-access-nsfvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.627774 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.648886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data" (OuterVolumeSpecName: "config-data") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708364 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708405 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708415 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerDied","Data":"8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5"} Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177793 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177849 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.435942 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436501 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436517 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436531 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436538 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436567 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436574 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436586 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="init" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436591 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="init" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436729 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" 
containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436749 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436759 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.437304 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.439194 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440585 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440883 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440956 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.441804 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.450090 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.451904 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.472691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.489575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.526383 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528260 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528753 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 
10:30:36.528872 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528985 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538741 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538776 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 
10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538834 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538858 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.603428 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.604808 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.620978 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.621261 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-527mc" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.621489 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.622166 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.628824 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640643 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 
10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640671 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640770 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640818 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.641878 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.641994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.642488 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.644600 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.652817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.656651 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.673010 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: 
\"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.676426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.676924 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.681102 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.681143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.708036 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc 
kubenswrapper[4984]: I0130 10:30:36.716313 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.717451 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.730317 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t4jkv" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.731699 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.731988 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" 
Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742740 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742761 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.754784 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.765478 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.768634 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.836231 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.837362 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.840778 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7t44v" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.840952 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.841056 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846148 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846221 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846309 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod 
\"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846396 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846417 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846453 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846511 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") 
" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.850143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.850533 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.851348 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.854537 4984 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.856412 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.860785 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.861013 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.864056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.870956 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.887695 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.902559 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.903874 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.934912 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.955553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956805 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956861 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956909 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956952 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956982 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966209 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966288 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966313 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966332 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966377 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966413 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966566 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966438 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: 
\"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970592 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.979968 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.984517 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.985813 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.987060 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod 
\"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:36.999843 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.008808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.036504 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.040239 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054162 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054358 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054464 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gnpsj" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072224 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072273 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " 
pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072312 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072359 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072395 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072410 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072428 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072444 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072481 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072504 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072535 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod 
\"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.073698 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.074007 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.074069 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.077638 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.088062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.099836 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.100299 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.102238 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.103289 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.105092 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.109283 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.115181 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.115666 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.119085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.120125 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.133342 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.134205 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: 
\"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176292 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176349 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176442 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176594 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: 
\"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.193073 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.214327 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.215692 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.230859 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.231433 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.238471 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.238639 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbvq5" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.246549 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.258800 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.262266 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.276484 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278089 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278912 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278952 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278981 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279013 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod 
\"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279065 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.280109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.280340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " 
pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283478 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283652 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283915 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287450 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287755 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287886 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.290517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.293471 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.301378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.314560 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.343728 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382150 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382176 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382191 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: 
\"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382514 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382546 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382587 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"barbican-db-sync-pxnz6\" (UID: 
\"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382685 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382713 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383216 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383267 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383285 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383306 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.387411 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.388406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.405350 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.407234 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485397 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485436 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485472 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485489 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc 
kubenswrapper[4984]: I0130 10:30:37.485525 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485543 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485641 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485657 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485681 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485715 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485736 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.487493 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.488398 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.488662 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.489690 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490197 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490688 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490936 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.491658 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.492552 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.494673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.499528 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.505630 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.506617 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.507591 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.559939 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.560862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.580606 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.597622 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.611512 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.627035 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.709702 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.711398 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.715267 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.715301 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.731597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.739648 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793617 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793880 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793925 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 
10:30:37.793943 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793980 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794041 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 
crc kubenswrapper[4984]: I0130 10:30:37.814229 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.823516 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895673 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895740 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895777 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lt6\" 
(UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.896406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.898245 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.899573 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.901754 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.908577 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.909411 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.921081 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: 
\"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.922607 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.923528 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.950127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.048502 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.056105 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4548afd2_be23_4ea7_a5a4_14b8fad4f5fa.slice/crio-fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4 WatchSource:0}: Error finding container fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4: Status 404 returned error can't find the container with id fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4 Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.057682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.066917 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.216939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c95864f45-hf2gl" event={"ID":"ea17d23b-4f8b-425c-bc10-f6bd35f661bf","Type":"ContainerStarted","Data":"a0f8ade3bd22a088d1a93fb53e8dade691d3e07b79328b0d3ec8e2f6f2ae944b"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.218415 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"33596f8966073af30764199a2a914ff3e9f8caa7ca44b53b652d4e885a08aa2f"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.219508 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerStarted","Data":"bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.220380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerStarted","Data":"8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.222398 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerStarted","Data":"b218f58fbde2b318e5bfbbfe5598d78b25851ef8311979e7ba06842db2e42d84"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.225692 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerStarted","Data":"b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.227204 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5bb97f77-vgk6b" event={"ID":"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa","Type":"ContainerStarted","Data":"fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.228242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerStarted","Data":"b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6"} Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.255815 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.290232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.328684 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f579b7_9f28_42f6_a7be_b7c562962f19.slice/crio-46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1 WatchSource:0}: Error finding container 46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1: Status 404 returned error can't find the container with id 46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1 Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.329796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.461206 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.516578 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.549788 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.557440 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.604906 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.624913 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625020 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625054 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625098 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc 
kubenswrapper[4984]: I0130 10:30:38.625125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.627392 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.660333 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731294 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731353 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731469 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.733657 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.734045 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.734239 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.735853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: 
\"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.782986 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.798828 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.868679 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.871219 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb045c802_b737_4590_82c8_e8a3a54247dc.slice/crio-a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a WatchSource:0}: Error finding container a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a: Status 404 returned error can't find the container with id a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.243433 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerStarted","Data":"82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.248478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a"} Jan 30 
10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.255009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerStarted","Data":"886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.257234 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerStarted","Data":"c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267041 4984 generic.go:334] "Generic (PLEG): container finished" podID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" exitCode=0 Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267373 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267414 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerStarted","Data":"46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.273361 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"b1fa41772b71982e34712806d8e4239d302c810441b3b76f99d64938a74e6924"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.281159 4984 generic.go:334] "Generic (PLEG): container finished" podID="1179d293-414e-4b1d-8020-37147612b45f" 
containerID="622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c" exitCode=0 Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.281223 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerDied","Data":"622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c"} Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.286782 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5hx59" podStartSLOduration=3.286765196 podStartE2EDuration="3.286765196s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:39.281705969 +0000 UTC m=+1143.848009793" watchObservedRunningTime="2026-01-30 10:30:39.286765196 +0000 UTC m=+1143.853069020" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.289773 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-62mq2" podStartSLOduration=3.289764468 podStartE2EDuration="3.289764468s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:39.265401954 +0000 UTC m=+1143.831705778" watchObservedRunningTime="2026-01-30 10:30:39.289764468 +0000 UTC m=+1143.856068282" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.414840 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.608239 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680112 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680283 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681404 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681437 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.689402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn" (OuterVolumeSpecName: "kube-api-access-2m4vn") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "kube-api-access-2m4vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735913 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.736769 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.737448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config" (OuterVolumeSpecName: "config") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735571 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791558 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791600 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791614 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791625 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 
10:30:39.791634 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791644 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.297280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.304472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerStarted","Data":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.304549 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.311121 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316676 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316822 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerDied","Data":"b218f58fbde2b318e5bfbbfe5598d78b25851ef8311979e7ba06842db2e42d84"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316885 4984 scope.go:117] "RemoveContainer" containerID="622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.319553 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59c58ffc9c-jj2qg" event={"ID":"c1e19dda-69fc-437b-b42c-727c3cff3813","Type":"ContainerStarted","Data":"9a4e15ad5e7e83774d55c10e4f5efd7b7cb63c50cf1ac1e695342968f871f85d"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.358074 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" podStartSLOduration=4.358050865 podStartE2EDuration="4.358050865s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:40.327872442 +0000 UTC m=+1144.894176266" watchObservedRunningTime="2026-01-30 10:30:40.358050865 +0000 UTC m=+1144.924354689" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.374878 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.383955 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.332058 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588"} Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.334749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d"} Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.334964 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" containerID="cri-o://cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" gracePeriod=30 Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.335051 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" containerID="cri-o://6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" gracePeriod=30 Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.357623 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.357605868 podStartE2EDuration="4.357605868s" podCreationTimestamp="2026-01-30 10:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:41.356319312 +0000 UTC m=+1145.922623136" watchObservedRunningTime="2026-01-30 10:30:41.357605868 +0000 UTC m=+1145.923909692" Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.108714 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1179d293-414e-4b1d-8020-37147612b45f" 
path="/var/lib/kubelet/pods/1179d293-414e-4b1d-8020-37147612b45f/volumes" Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.347915 4984 generic.go:334] "Generic (PLEG): container finished" podID="cfc24334-4217-4656-9b38-281626334606" containerID="cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" exitCode=143 Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c"} Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348090 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" containerID="cri-o://ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" gracePeriod=30 Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348177 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" containerID="cri-o://ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" gracePeriod=30 Jan 30 10:30:42 crc kubenswrapper[4984]: E0130 10:30:42.624364 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb045c802_b737_4590_82c8_e8a3a54247dc.slice/crio-ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588.scope\": RecentStats: unable to find data in memory cache]" Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.360209 4984 generic.go:334] "Generic (PLEG): container finished" podID="cfc24334-4217-4656-9b38-281626334606" 
containerID="6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" exitCode=0 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.360289 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d"} Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362922 4984 generic.go:334] "Generic (PLEG): container finished" podID="b045c802-b737-4590-82c8-e8a3a54247dc" containerID="ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" exitCode=0 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362950 4984 generic.go:334] "Generic (PLEG): container finished" podID="b045c802-b737-4590-82c8-e8a3a54247dc" containerID="ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" exitCode=143 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588"} Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362994 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719"} Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.085893 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.085871631 podStartE2EDuration="9.085871631s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:42.378686577 
+0000 UTC m=+1146.944990411" watchObservedRunningTime="2026-01-30 10:30:45.085871631 +0000 UTC m=+1149.652175455" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.095115 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.148192 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:30:45 crc kubenswrapper[4984]: E0130 10:30:45.148897 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.148924 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.149237 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.150630 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.154455 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.192116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198670 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198709 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198741 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198769 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc 
kubenswrapper[4984]: I0130 10:30:45.199270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.199354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.199565 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.219871 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.233067 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.237848 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.249066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301017 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301071 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301093 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301110 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301127 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301160 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301386 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpl8\" (UniqueName: 
\"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301586 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301707 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod 
\"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.302198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.302424 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.306877 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.307387 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.318337 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 
10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.332801 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403436 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403539 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpl8\" (UniqueName: \"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403713 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403767 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403830 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.404042 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.404844 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.405728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") 
pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.407760 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.407768 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.408521 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.423402 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpl8\" (UniqueName: \"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.483170 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.558323 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.614522 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.698742 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.699052 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" containerID="cri-o://7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" gracePeriod=10 Jan 30 10:30:49 crc kubenswrapper[4984]: I0130 10:30:49.420440 4984 generic.go:334] "Generic (PLEG): container finished" podID="90d82977-c98d-495c-bb24-89cbe285c74e" containerID="7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" exitCode=0 Jan 30 10:30:49 crc kubenswrapper[4984]: I0130 10:30:49.420503 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116"} Jan 30 10:30:50 crc kubenswrapper[4984]: I0130 10:30:50.635588 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.705590 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.706114 4984 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h548h5b7h8bh5bch5c9h66fhc5hf6h59h589h9bh87h5cdh586h5f7h8bh65fh64h688h68hcch9h649h96h9fh88h664hc9h559h654hb8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hnsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-7c95864f45-hf2gl_openstack(ea17d23b-4f8b-425c-bc10-f6bd35f661bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.710877 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c95864f45-hf2gl" podUID="ea17d23b-4f8b-425c-bc10-f6bd35f661bf" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.862451 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.862673 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h95h5c7h64h54dh554h664h665hf7h79h648h649hbfh666hbbh5d9h588h5d7h5bch564h687h658h79h597hb8h56fhf4h644h5cch5d9h5chc4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6c5bb97f77-vgk6b_openstack(4548afd2-be23-4ea7-a5a4-14b8fad4f5fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 
10:30:53.865104 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6c5bb97f77-vgk6b" podUID="4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.413613 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.466334 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495634 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495706 4984 scope.go:117] "RemoveContainer" containerID="ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.500383 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"b1fa41772b71982e34712806d8e4239d302c810441b3b76f99d64938a74e6924"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.500417 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.503062 4984 generic.go:334] "Generic (PLEG): container finished" podID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerID="82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990" exitCode=0 Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.503135 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerDied","Data":"82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519725 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519944 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 
30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519998 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520071 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520087 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520110 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod 
\"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520136 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520150 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520174 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520191 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520209 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520225 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.522397 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.523582 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs" (OuterVolumeSpecName: "logs") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.536679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts" (OuterVolumeSpecName: "scripts") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.538718 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.538897 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x" (OuterVolumeSpecName: "kube-api-access-sw66x") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "kube-api-access-sw66x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.540614 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs" (OuterVolumeSpecName: "logs") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.540982 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.543408 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.557027 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6" (OuterVolumeSpecName: "kube-api-access-b5lt6") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "kube-api-access-b5lt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.584878 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.586086 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts" (OuterVolumeSpecName: "scripts") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.591079 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.607574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data" (OuterVolumeSpecName: "config-data") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.615425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.616901 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data" (OuterVolumeSpecName: "config-data") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623585 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623636 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623662 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623671 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623704 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623712 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623730 4984 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623743 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623751 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623784 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623793 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623801 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623809 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623817 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc 
kubenswrapper[4984]: I0130 10:30:55.629480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.640903 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.643936 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725355 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725389 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725404 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.848070 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.867908 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc 
kubenswrapper[4984]: I0130 10:30:55.882577 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.893690 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.920752 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921194 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921218 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921234 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921242 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921284 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921292 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921321 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921328 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" 
containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921515 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921541 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921556 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921572 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.922621 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.926950 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927553 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927568 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927794 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.938862 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.941944 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.946382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.946673 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.948877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.957831 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030583 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030648 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030679 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc 
kubenswrapper[4984]: I0130 10:30:56.030702 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030730 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030773 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030793 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 
crc kubenswrapper[4984]: I0130 10:30:56.030811 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030842 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030867 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030890 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030920 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030952 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030996 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.103611 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" path="/var/lib/kubelet/pods/b045c802-b737-4590-82c8-e8a3a54247dc/volumes" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.106600 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc24334-4217-4656-9b38-281626334606" path="/var/lib/kubelet/pods/cfc24334-4217-4656-9b38-281626334606/volumes" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132583 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") 
pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132786 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132849 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132899 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132921 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132938 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132959 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 
30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132977 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133003 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133043 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133076 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133380 4984 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133663 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.134240 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.134475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.138405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.139073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.139705 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.141914 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.142749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.143287 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.145509 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.146616 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.148559 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.149114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.154222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.155423 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") 
" pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.167278 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.171740 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.251106 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.261068 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:31:00 crc kubenswrapper[4984]: I0130 10:31:00.634665 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000718 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000793 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.001679 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.001751 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796" gracePeriod=600 Jan 30 10:31:03 crc kubenswrapper[4984]: E0130 10:31:03.062193 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1bd910_b683_42bf_966f_51a04ac18bd2.slice/crio-337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796.scope\": RecentStats: unable to find data in memory cache]" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.577946 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796" exitCode=0 Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.577983 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} Jan 30 10:31:05 crc kubenswrapper[4984]: I0130 10:31:05.636210 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:05 crc kubenswrapper[4984]: I0130 10:31:05.636928 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 10:31:08.552838 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 
10:31:08.553507 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n594hb7h664h5d6h7fh9bh5cbh55dh58dh694h5fh59ch67bh645h57h64h5bh566h5fdh669h5c8h8ch5f4h649h67dhc6h5b4h68fh559h546h64ch5f5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-59c58ffc9c-jj2qg_openstack(c1e19dda-69fc-437b-b42c-727c3cff3813): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 10:31:08.557464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59c58ffc9c-jj2qg" podUID="c1e19dda-69fc-437b-b42c-727c3cff3813" Jan 30 10:31:10 crc kubenswrapper[4984]: I0130 10:31:10.637888 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.009487 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.015449 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072354 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072400 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072521 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072566 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs" (OuterVolumeSpecName: "logs") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072701 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072908 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts" (OuterVolumeSpecName: "scripts") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073815 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073406 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs" (OuterVolumeSpecName: "logs") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073503 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts" (OuterVolumeSpecName: "scripts") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073972 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data" (OuterVolumeSpecName: "config-data") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073975 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data" (OuterVolumeSpecName: "config-data") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.079045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.079150 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd" (OuterVolumeSpecName: "kube-api-access-4hnsd") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "kube-api-access-4hnsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.084456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d" (OuterVolumeSpecName: "kube-api-access-2ws2d") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "kube-api-access-2ws2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.086409 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175639 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175691 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175710 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175725 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") on node 
\"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175737 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175754 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175767 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175779 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175791 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.692008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c95864f45-hf2gl" event={"ID":"ea17d23b-4f8b-425c-bc10-f6bd35f661bf","Type":"ContainerDied","Data":"a0f8ade3bd22a088d1a93fb53e8dade691d3e07b79328b0d3ec8e2f6f2ae944b"} Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.692045 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.693477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5bb97f77-vgk6b" event={"ID":"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa","Type":"ContainerDied","Data":"fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4"} Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.693587 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.789291 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.801825 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.817335 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.826098 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:31:14 crc kubenswrapper[4984]: I0130 10:31:14.102867 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" path="/var/lib/kubelet/pods/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa/volumes" Jan 30 10:31:14 crc kubenswrapper[4984]: I0130 10:31:14.118511 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea17d23b-4f8b-425c-bc10-f6bd35f661bf" path="/var/lib/kubelet/pods/ea17d23b-4f8b-425c-bc10-f6bd35f661bf/volumes" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.426154 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 30 10:31:14 crc 
kubenswrapper[4984]: E0130 10:31:14.426723 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8chzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-bfzdw_openstack(3048d738-67a2-417f-91ca-8993f4b557f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.428382 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-bfzdw" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.704113 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-bfzdw" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.946784 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.946967 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92h6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pxnz6_openstack(84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.948153 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pxnz6" 
podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.038010 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.114905 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.114964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115089 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115113 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115131 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 
10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115201 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.138556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6" (OuterVolumeSpecName: "kube-api-access-6j8l6") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "kube-api-access-6j8l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.157204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.159710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.160560 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.163078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.165471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config" (OuterVolumeSpecName: "config") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217128 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217186 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217199 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217212 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217224 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217234 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.639346 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:15 crc 
kubenswrapper[4984]: I0130 10:31:15.712573 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c"} Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.712695 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:31:15 crc kubenswrapper[4984]: E0130 10:31:15.714935 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pxnz6" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.766472 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.775400 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:31:16 crc kubenswrapper[4984]: I0130 10:31:16.107141 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" path="/var/lib/kubelet/pods/90d82977-c98d-495c-bb24-89cbe285c74e/volumes" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.081989 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.085879 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.086313 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4q4x7_openstack(67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.087532 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4q4x7" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.100341 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160511 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160714 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160791 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160821 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160859 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160990 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161091 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161138 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161731 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts" (OuterVolumeSpecName: "scripts") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161790 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs" (OuterVolumeSpecName: "logs") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161803 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data" (OuterVolumeSpecName: "config-data") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161943 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161961 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161972 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166094 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d" (OuterVolumeSpecName: "kube-api-access-wwl5d") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "kube-api-access-wwl5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166522 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd" (OuterVolumeSpecName: "kube-api-access-j94kd") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "kube-api-access-j94kd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166982 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.167275 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.169071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts" (OuterVolumeSpecName: "scripts") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.179532 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.190034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.192453 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data" (OuterVolumeSpecName: "config-data") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263609 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263651 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263666 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263677 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 
10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263694 4984 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263705 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263716 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263728 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.488541 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.489092 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h659hcbh5d6h5c8h647h57bh677h5b7h88h69hdh84h7bh669h676hfbh5b6h688h657h88h5dbh5bfh96h686hbfh5d4h65fh64dh589h8fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nltq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(092048c5-1cfe-40c2-a319-23dde30a6c80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.514313 4984 scope.go:117] "RemoveContainer" containerID="ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.605428 4984 scope.go:117] "RemoveContainer" containerID="6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.607117 4984 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/glance-default-external-api-0_openstack_glance-httpd-6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d.log: no such file or directory" path="/var/log/containers/glance-default-external-api-0_openstack_glance-httpd-6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d.log" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.645286 4984 scope.go:117] "RemoveContainer" containerID="cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" 
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.682154 4984 scope.go:117] "RemoveContainer" containerID="b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerDied","Data":"b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d"} Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739159 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739312 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.742851 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.743065 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59c58ffc9c-jj2qg" event={"ID":"c1e19dda-69fc-437b-b42c-727c3cff3813","Type":"ContainerDied","Data":"9a4e15ad5e7e83774d55c10e4f5efd7b7cb63c50cf1ac1e695342968f871f85d"} Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.765778 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4q4x7" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.772266 4984 scope.go:117] "RemoveContainer" containerID="7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.814788 4984 scope.go:117] "RemoveContainer" containerID="d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.826990 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.833652 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.947811 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.066559 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:31:18 crc kubenswrapper[4984]: W0130 10:31:18.075675 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1c7d24e_f131_485d_aaec_80a94d7ddd96.slice/crio-258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb WatchSource:0}: Error finding container 258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb: Status 404 returned error can't find the container with id 258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.118852 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e19dda-69fc-437b-b42c-727c3cff3813" path="/var/lib/kubelet/pods/c1e19dda-69fc-437b-b42c-727c3cff3813/volumes" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.168632 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.174756 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.273751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274093 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274106 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274118 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="init" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274124 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="init" Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274143 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274152 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274356 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274377 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.276976 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281047 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281268 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281388 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281497 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.290177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389562 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389874 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.390145 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.492816 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493110 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493314 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggl7w\" 
(UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.499115 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.499634 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.500834 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.505810 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: 
\"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.506316 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.520290 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.599207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.764647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"69bd05a6495e5cb7cdf4e1d3db592b4ecb95799d07ea2642a2cb5673af58d135"} Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.766513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb"} Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.769514 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"} Jan 30 10:31:19 crc 
kubenswrapper[4984]: I0130 10:31:19.066560 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:31:19 crc kubenswrapper[4984]: W0130 10:31:19.254289 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a91d1d_433e_415f_83f8_04185f2bae8e.slice/crio-81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b WatchSource:0}: Error finding container 81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b: Status 404 returned error can't find the container with id 81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.475447 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:19 crc kubenswrapper[4984]: W0130 10:31:19.495310 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ce38a2_070f_4aac_9495_d27d915c5ae1.slice/crio-0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4 WatchSource:0}: Error finding container 0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4: Status 404 returned error can't find the container with id 0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4 Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809493 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerStarted","Data":"ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809736 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" 
event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerStarted","Data":"0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809843 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.813655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.824656 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.830264 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qb89x" podStartSLOduration=1.8302334139999998 podStartE2EDuration="1.830233414s" podCreationTimestamp="2026-01-30 10:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:19.825855965 +0000 UTC m=+1184.392159789" watchObservedRunningTime="2026-01-30 10:31:19.830233414 +0000 UTC m=+1184.396537238" Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.842886 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"54296f32da4ae48ad1ffc63b9f73dfd3b05f17ecbf08112bb356c4210bf7eeba"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.854391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" 
event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.854427 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.883463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b65cc758d-9hz7t" podStartSLOduration=33.841951689 podStartE2EDuration="34.883447724s" podCreationTimestamp="2026-01-30 10:30:45 +0000 UTC" firstStartedPulling="2026-01-30 10:31:17.94527295 +0000 UTC m=+1182.511576774" lastFinishedPulling="2026-01-30 10:31:18.986768985 +0000 UTC m=+1183.553072809" observedRunningTime="2026-01-30 10:31:19.880527525 +0000 UTC m=+1184.446831339" watchObservedRunningTime="2026-01-30 10:31:19.883447724 +0000 UTC m=+1184.449751548" Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.109414 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" path="/var/lib/kubelet/pods/34fee7b8-8c52-498f-a9b2-ed2b18f555cc/volumes" Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.884236 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"10fdf60d10e120f734c85f0a1581a5c707b14d2b9d601ff8411e37e16ff37617"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.887864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540"} Jan 30 10:31:20 crc 
kubenswrapper[4984]: I0130 10:31:20.887902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"8a1e7d08bcb7a1c10909d3b6f8549348ca67f5b537c84b6ec8529217335158a6"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.897351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.906869 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cb76cb6cb-wtx8d" podStartSLOduration=34.595054695 podStartE2EDuration="35.906854087s" podCreationTimestamp="2026-01-30 10:30:45 +0000 UTC" firstStartedPulling="2026-01-30 10:31:18.07809198 +0000 UTC m=+1182.644395804" lastFinishedPulling="2026-01-30 10:31:19.389891372 +0000 UTC m=+1183.956195196" observedRunningTime="2026-01-30 10:31:20.900839833 +0000 UTC m=+1185.467143657" watchObservedRunningTime="2026-01-30 10:31:20.906854087 +0000 UTC m=+1185.473157911" Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.905557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b"} Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.910097 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d"} Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.933470 4984 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.933451876 podStartE2EDuration="26.933451876s" podCreationTimestamp="2026-01-30 10:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:21.921712037 +0000 UTC m=+1186.488015861" watchObservedRunningTime="2026-01-30 10:31:21.933451876 +0000 UTC m=+1186.499755690" Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.951308 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.951283533 podStartE2EDuration="26.951283533s" podCreationTimestamp="2026-01-30 10:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:21.945169006 +0000 UTC m=+1186.511472840" watchObservedRunningTime="2026-01-30 10:31:21.951283533 +0000 UTC m=+1186.517587377" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.484178 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.485904 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.559140 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.559312 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.953432 4984 generic.go:334] "Generic (PLEG): container finished" podID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerID="ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd" exitCode=0 Jan 30 10:31:25 
crc kubenswrapper[4984]: I0130 10:31:25.953752 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerDied","Data":"ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd"} Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.251775 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252142 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252159 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252172 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261424 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261481 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261673 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261790 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.287672 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.289708 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.317213 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.336750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.145884 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303648 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303685 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303715 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") 
pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303742 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303798 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.318109 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w" (OuterVolumeSpecName: "kube-api-access-ggl7w") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "kube-api-access-ggl7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.377342 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.377999 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts" (OuterVolumeSpecName: "scripts") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405800 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405813 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.415917 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.415980 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.416083 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data" (OuterVolumeSpecName: "config-data") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507577 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507614 4984 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507631 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.991885 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerDied","Data":"0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4"} Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.991922 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.992028 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.998809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.998917 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.154900 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.154997 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.157493 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273057 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:29 crc kubenswrapper[4984]: E0130 10:31:29.273424 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273439 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273600 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.274109 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.276882 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.278661 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279092 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279346 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279835 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.280161 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.291410 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.310358 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422891 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422929 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423178 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.524848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525177 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525228 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525264 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525356 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525462 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.530875 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: 
\"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.531644 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.541944 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.542808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.542862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.544935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.545839 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.549357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.592909 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:30 crc kubenswrapper[4984]: I0130 10:31:30.086893 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9687b7-kdppr" event={"ID":"0cddf025-bb36-4984-82b8-360ab9f3d91c","Type":"ContainerStarted","Data":"6232915df589444d72f77d65fcbb2851429349743e1cf5c2966d156f8cf417c1"} Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011966 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9687b7-kdppr" event={"ID":"0cddf025-bb36-4984-82b8-360ab9f3d91c","Type":"ContainerStarted","Data":"9f16e57b5268ca29551a4a9d6721691ca6c4503ade57184c5bcf50853ef3cbdb"} Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011988 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.039057 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-9fd9687b7-kdppr" podStartSLOduration=2.039023459 podStartE2EDuration="2.039023459s" podCreationTimestamp="2026-01-30 10:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:31.030739983 +0000 UTC m=+1195.597043817" watchObservedRunningTime="2026-01-30 10:31:31.039023459 +0000 UTC m=+1195.605327283" Jan 30 10:31:35 crc kubenswrapper[4984]: I0130 10:31:35.485711 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:31:35 crc kubenswrapper[4984]: I0130 10:31:35.562307 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb76cb6cb-wtx8d" podUID="d1c7d24e-f131-485d-aaec-80a94d7ddd96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.084982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.088592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerStarted","Data":"39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.090953 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" 
event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerStarted","Data":"f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.093563 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerStarted","Data":"71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.114318 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4q4x7" podStartSLOduration=2.870584614 podStartE2EDuration="1m1.114298118s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="2026-01-30 10:30:37.869962052 +0000 UTC m=+1142.436265876" lastFinishedPulling="2026-01-30 10:31:36.113675556 +0000 UTC m=+1200.679979380" observedRunningTime="2026-01-30 10:31:37.106841725 +0000 UTC m=+1201.673145559" watchObservedRunningTime="2026-01-30 10:31:37.114298118 +0000 UTC m=+1201.680601952" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.127914 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pxnz6" podStartSLOduration=3.5824867559999998 podStartE2EDuration="1m1.127896059s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="2026-01-30 10:30:38.284155581 +0000 UTC m=+1142.850459395" lastFinishedPulling="2026-01-30 10:31:35.829564874 +0000 UTC m=+1200.395868698" observedRunningTime="2026-01-30 10:31:37.126130381 +0000 UTC m=+1201.692434205" watchObservedRunningTime="2026-01-30 10:31:37.127896059 +0000 UTC m=+1201.694199883" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.147217 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bfzdw" podStartSLOduration=3.373725286 podStartE2EDuration="1m1.147203295s" podCreationTimestamp="2026-01-30 10:30:36 
+0000 UTC" firstStartedPulling="2026-01-30 10:30:38.060007932 +0000 UTC m=+1142.626311756" lastFinishedPulling="2026-01-30 10:31:35.833485901 +0000 UTC m=+1200.399789765" observedRunningTime="2026-01-30 10:31:37.143838603 +0000 UTC m=+1201.710142417" watchObservedRunningTime="2026-01-30 10:31:37.147203295 +0000 UTC m=+1201.713507109" Jan 30 10:31:47 crc kubenswrapper[4984]: I0130 10:31:47.353965 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:47 crc kubenswrapper[4984]: I0130 10:31:47.375772 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.050414 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.131658 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.133207 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.207611 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log" containerID="cri-o://e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" gracePeriod=30 Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.207767 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" containerID="cri-o://5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3" gracePeriod=30 Jan 30 10:31:51 crc kubenswrapper[4984]: E0130 10:31:51.794953 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233227 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338"} Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233421 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd" containerID="cri-o://fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233468 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent" containerID="cri-o://da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233484 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core" containerID="cri-o://46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233691 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.244612 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" 
containerID="fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338" exitCode=0 Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.245007 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerID="46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e" exitCode=2 Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.244717 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338"} Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.245070 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e"} Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.247368 4984 generic.go:334] "Generic (PLEG): container finished" podID="1238c32f-7644-4b33-8960-b97c64733162" containerID="5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3" exitCode=0 Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.247409 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3"} Jan 30 10:31:54 crc kubenswrapper[4984]: I0130 10:31:54.258629 4984 generic.go:334] "Generic (PLEG): container finished" podID="3048d738-67a2-417f-91ca-8993f4b557f1" containerID="f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732" exitCode=0 Jan 30 10:31:54 crc kubenswrapper[4984]: I0130 10:31:54.258741 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" 
event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerDied","Data":"f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732"} Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.274305 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerID="da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e" exitCode=0 Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.274319 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e"} Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.276748 4984 generic.go:334] "Generic (PLEG): container finished" podID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerID="71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb" exitCode=0 Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.276797 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerDied","Data":"71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb"} Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.484579 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.520975 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618028 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618102 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618197 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618236 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618362 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618857 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.619903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.620193 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.625595 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq" (OuterVolumeSpecName: "kube-api-access-2nltq") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "kube-api-access-2nltq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.630386 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts" (OuterVolumeSpecName: "scripts") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.653120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.693297 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.713356 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data" (OuterVolumeSpecName: "config-data") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.719856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.719908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720090 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720135 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720192 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720578 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720595 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720604 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720612 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720621 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720628 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720638 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.721448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs" (OuterVolumeSpecName: "logs") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: 
"3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.723420 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts" (OuterVolumeSpecName: "scripts") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.723579 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc" (OuterVolumeSpecName: "kube-api-access-8chzc") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "kube-api-access-8chzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.744675 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.752257 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data" (OuterVolumeSpecName: "config-data") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822507 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822552 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822565 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822577 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822590 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290375 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerDied","Data":"8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2"} Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290438 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290520 4984 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.303684 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.303925 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"33596f8966073af30764199a2a914ff3e9f8caa7ca44b53b652d4e885a08aa2f"} Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.304027 4984 scope.go:117] "RemoveContainer" containerID="fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.407433 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.419868 4984 scope.go:117] "RemoveContainer" containerID="46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.424362 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431088 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68474f84b8-6pzwt"] Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431498 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431516 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core" Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431530 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 
10:31:56.431536 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd" Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431552 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431558 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent" Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431582 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431588 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431746 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431803 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431824 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431842 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.432801 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.435068 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.439766 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.439987 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.440144 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.440809 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gnpsj" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.441767 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.443713 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.449499 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.449693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.454433 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68474f84b8-6pzwt"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.476202 4984 scope.go:117] "RemoveContainer" containerID="da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.488919 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538605 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538629 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: 
\"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538702 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538782 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " 
pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538873 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc 
kubenswrapper[4984]: I0130 10:31:56.538994 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652677 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652844 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652951 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652996 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653073 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: 
\"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653198 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653290 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653822 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653873 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.659063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.667554 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.668565 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.673265 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.675494 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.675819 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.680689 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: 
\"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.680991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.681943 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.682294 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.685913 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.691230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.695941 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.703082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.728802 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.731206 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.772368 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.886395 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.961892 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.962021 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.962064 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.969434 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.969813 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j" (OuterVolumeSpecName: "kube-api-access-92h6j") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). 
InnerVolumeSpecName "kube-api-access-92h6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.043356 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065070 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065233 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065274 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.183792 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68474f84b8-6pzwt"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.351508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:57 crc kubenswrapper[4984]: W0130 10:31:57.363133 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5ff484_b6c4_42ea_ae17_1b11c214f435.slice/crio-3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b 
WatchSource:0}: Error finding container 3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b: Status 404 returned error can't find the container with id 3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.364338 4984 generic.go:334] "Generic (PLEG): container finished" podID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerID="39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1" exitCode=0 Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.364423 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerDied","Data":"39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376704 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376719 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerDied","Data":"c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376749 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.390530 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"64c0d3ab3a05e57909f45192a18006bb1483b72ac403a24ddf8b45b711cbbf43"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.499730 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"] Jan 30 10:31:57 crc 
kubenswrapper[4984]: E0130 10:31:57.500129 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.500165 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.500367 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.501289 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507345 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507562 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbvq5" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507802 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.514211 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580681 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580701 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.586219 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.587574 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.593927 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.603691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.619807 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.621237 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.629029 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694597 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694691 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 
10:31:57.694737 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694773 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694789 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.695405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.701654 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.702495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod 
\"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.702868 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.703387 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.713584 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.720077 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.726422 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.744354 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805409 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805668 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805716 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805761 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805852 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805890 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 
10:31:57.805907 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805927 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805975 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805999 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.806015 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 
10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.806035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.807384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.813152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.817894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.825890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " 
pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.835038 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.907999 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908080 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908202 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: 
\"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908261 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908441 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908648 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.909415 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.910433 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.910939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911417 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911459 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911561 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.916820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.919163 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.924776 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.927079 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.930236 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.037897 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.066173 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.083684 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.092808 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.104020 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" path="/var/lib/kubelet/pods/092048c5-1cfe-40c2-a319-23dde30a6c80/volumes" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.409117 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"15f794d2d82829def16964874a24d8eba8ba29a4e58a6bb4e3e49826860e3b40"} Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.409521 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"1771be91f0f551b12866727a480e2180dc533c5cb1832ea0888f57d3c300d1ed"} Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.411206 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.411338 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.414778 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b"} Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.414818 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b"} Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.457214 4984 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/placement-68474f84b8-6pzwt" podStartSLOduration=2.457196187 podStartE2EDuration="2.457196187s" podCreationTimestamp="2026-01-30 10:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:58.445648193 +0000 UTC m=+1223.011952027" watchObservedRunningTime="2026-01-30 10:31:58.457196187 +0000 UTC m=+1223.023500011" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.603281 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"] Jan 30 10:31:58 crc kubenswrapper[4984]: W0130 10:31:58.749849 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1368411d_c934_4d15_a67b_dc840dbe010d.slice/crio-efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14 WatchSource:0}: Error finding container efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14: Status 404 returned error can't find the container with id efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14 Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.755447 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"] Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.765584 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.859408 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.866796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:31:58 crc kubenswrapper[4984]: W0130 10:31:58.881537 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04477670_b6dd_441f_909a_e6b56bf335d5.slice/crio-a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631 WatchSource:0}: Error finding container a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631: Status 404 returned error can't find the container with id a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631 Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.942960 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943132 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943201 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 
crc kubenswrapper[4984]: I0130 10:31:58.943301 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943431 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943489 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.944369 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.947470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts" (OuterVolumeSpecName: 
"scripts") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.949129 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp" (OuterVolumeSpecName: "kube-api-access-nrxhp") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "kube-api-access-nrxhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.951385 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.001769 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.021710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data" (OuterVolumeSpecName: "config-data") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045802 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045836 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045848 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045858 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045867 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435714 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435971 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" 
event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435981 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.438076 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.438103 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441304 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerDied","Data":"bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441333 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441341 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.448569 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.465215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"d99dde3d2e570bc78da84291eccae7818930b55d33c57c5d7b3f7af85875e25b"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.475439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.475477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.476997 4984 generic.go:334] "Generic (PLEG): container finished" podID="db250c1d-d110-46f5-ae22-46a1e507a922" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37" exitCode=0 Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.477586 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.477641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerStarted","Data":"5804a7b25d440dcdbdeccb2de4750734b5adefbb2c8bb1ff519cee9eb3d7d6fa"} Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.504474 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-686dddff74-vgg85" podStartSLOduration=2.504455781 podStartE2EDuration="2.504455781s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:59.465066067 +0000 UTC m=+1224.031369901" watchObservedRunningTime="2026-01-30 10:31:59.504455781 +0000 UTC m=+1224.070759605" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674425 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:31:59 crc kubenswrapper[4984]: E0130 10:31:59.674771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674783 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674954 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.675916 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684149 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684340 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t4jkv" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684451 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684567 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.693731 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.775313 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.818645 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.820264 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.837837 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870494 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870616 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870668 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870710 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.972863 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.972967 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973233 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973285 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973321 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973343 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973392 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973423 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.977895 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.985403 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " 
pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.986126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.987643 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.987811 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.016360 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.018728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.020359 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.023972 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.034312 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.040418 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074710 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074760 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074795 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074835 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074857 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.075789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.075960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.076482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " 
pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.077073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.077114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.113430 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.174847 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.175925 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.175953 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176014 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176049 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176083 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176098 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277662 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277714 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277761 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277856 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277910 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.278310 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.279219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.282493 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.282683 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.283070 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.289222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.296227 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.454638 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:01 crc kubenswrapper[4984]: I0130 10:32:01.533997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:32:01 crc kubenswrapper[4984]: I0130 10:32:01.904242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.009483 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:32:02 crc kubenswrapper[4984]: W0130 10:32:02.037293 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a86d1a_4829_4934_83dd_b52dc378a4cf.slice/crio-748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4 WatchSource:0}: Error finding container 748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4: Status 404 returned error can't find the container with id 748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.137536 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:02 crc kubenswrapper[4984]: W0130 10:32:02.141305 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcde56acd_942d_47dd_8417_8c92170502ce.slice/crio-32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8 WatchSource:0}: Error finding container 32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8: Status 404 returned error can't find the container with id 32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" 
event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerStarted","Data":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538427 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538438 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" containerID="cri-o://49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" gracePeriod=10 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.547412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"5d408605319c89d081b5548ebdb4c7ea288ca2bdefa7e08a28be726765947e9d"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553242 4984 generic.go:334] "Generic (PLEG): container finished" podID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" exitCode=0 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553306 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553326 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerStarted","Data":"748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.562299 4984 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" podStartSLOduration=5.562283151 podStartE2EDuration="5.562283151s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:02.557765278 +0000 UTC m=+1227.124069112" watchObservedRunningTime="2026-01-30 10:32:02.562283151 +0000 UTC m=+1227.128586975" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.566315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.589135 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"c1dc024a8a372c30c6e909e2090d9ed9d87971cc75d538f887f6f0dc53951197"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.589225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"2407eb0ad66869e23ece625c09789b110449d66ee9e08012369a3b9f50b5e63f"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.603454 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"2897b1ba156a11bb223869a25cfb36e7d9bb72cd81afd6033d9097d57b33c578"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.603498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" 
event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"750f154a519d1c25a2cb7fa7537cd7133032f283828a1c85195f09c5b402de7a"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624682 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent" containerID="cri-o://9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624860 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624900 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd" containerID="cri-o://3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624940 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core" containerID="cri-o://3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624979 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent" containerID="cri-o://96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00" 
gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.664960 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" podStartSLOduration=3.093723461 podStartE2EDuration="5.664936939s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="2026-01-30 10:31:58.760979957 +0000 UTC m=+1223.327283781" lastFinishedPulling="2026-01-30 10:32:01.332193415 +0000 UTC m=+1225.898497259" observedRunningTime="2026-01-30 10:32:02.615007888 +0000 UTC m=+1227.181311712" watchObservedRunningTime="2026-01-30 10:32:02.664936939 +0000 UTC m=+1227.231240763" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.666961 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-664bd6b5fc-shfjg" podStartSLOduration=2.922804882 podStartE2EDuration="5.666957134s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="2026-01-30 10:31:58.618587236 +0000 UTC m=+1223.184891060" lastFinishedPulling="2026-01-30 10:32:01.362739478 +0000 UTC m=+1225.929043312" observedRunningTime="2026-01-30 10:32:02.656092588 +0000 UTC m=+1227.222396412" watchObservedRunningTime="2026-01-30 10:32:02.666957134 +0000 UTC m=+1227.233260958" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.713463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.717112622 podStartE2EDuration="6.713444911s" podCreationTimestamp="2026-01-30 10:31:56 +0000 UTC" firstStartedPulling="2026-01-30 10:31:57.365885444 +0000 UTC m=+1221.932189268" lastFinishedPulling="2026-01-30 10:32:01.362217733 +0000 UTC m=+1225.928521557" observedRunningTime="2026-01-30 10:32:02.693659962 +0000 UTC m=+1227.259963786" watchObservedRunningTime="2026-01-30 10:32:02.713444911 +0000 UTC m=+1227.279748735" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.034003 4984 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166419 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166531 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166575 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166641 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166679 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166700 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.176882 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv" (OuterVolumeSpecName: "kube-api-access-4hjwv") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "kube-api-access-4hjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.238621 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.269743 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.269783 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.275451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.284497 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config" (OuterVolumeSpecName: "config") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.291656 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.335480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355147 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.355783 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355801 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.355817 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="init" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355822 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="init" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.356122 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.356844 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.373831 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.375720 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vkmk4" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.375919 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.376150 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379333 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379378 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379389 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379397 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492343 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492495 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492638 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594272 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594396 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod 
\"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594433 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594530 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.598457 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.608060 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.609390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 
10:32:03.612933 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657184 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca" exitCode=0 Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657281 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631" exitCode=2 Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657289 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00" exitCode=0 Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657296 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b" exitCode=0 Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657373 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657468 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 
10:32:03.657512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657524 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657538 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657549 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669854 4984 generic.go:334] "Generic (PLEG): container finished" podID="db250c1d-d110-46f5-ae22-46a1e507a922" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" exitCode=0 Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669943 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"5804a7b25d440dcdbdeccb2de4750734b5adefbb2c8bb1ff519cee9eb3d7d6fa"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 
10:32:03.669985 4984 scope.go:117] "RemoveContainer" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.670111 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.676334 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.685187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerStarted","Data":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"} Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.685869 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.686702 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.716495 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" podStartSLOduration=4.716476519 podStartE2EDuration="4.716476519s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:03.708159113 +0000 UTC m=+1228.274462937" watchObservedRunningTime="2026-01-30 10:32:03.716476519 +0000 UTC m=+1228.282780343" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.741673 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.744297 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.755647 4984 scope.go:117] "RemoveContainer" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.759731 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.766776 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797474 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797713 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797782 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797875 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.798010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.801218 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.802338 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.803592 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6" (OuterVolumeSpecName: "kube-api-access-kgvw6") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "kube-api-access-kgvw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.804505 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts" (OuterVolumeSpecName: "scripts") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.804548 4984 scope.go:117] "RemoveContainer" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.805031 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": container with ID starting with 49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228 not found: ID does not exist" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805074 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"} err="failed to get container status \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": rpc error: code = NotFound desc = could not find 
container \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": container with ID starting with 49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228 not found: ID does not exist" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805101 4984 scope.go:117] "RemoveContainer" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37" Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.805420 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": container with ID starting with 3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37 not found: ID does not exist" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805449 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"} err="failed to get container status \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": rpc error: code = NotFound desc = could not find container \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": container with ID starting with 3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37 not found: ID does not exist" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902396 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902678 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 
10:32:03.902763 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902836 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.903634 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.903883 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.934799 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data" (OuterVolumeSpecName: "config-data") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004509 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004893 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004911 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.108384 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" path="/var/lib/kubelet/pods/db250c1d-d110-46f5-ae22-46a1e507a922/volumes" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.386419 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.473801 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"] Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474174 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474193 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474207 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474214 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd" Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474230 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474237 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core" Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474276 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474283 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474469 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474491 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474505 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474519 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.475419 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.477766 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.478224 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.515083 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.619809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620186 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620256 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620432 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620601 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620851 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.697912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"141e094b-e8c8-4a61-b93c-8dec5ac89823","Type":"ContainerStarted","Data":"36554c84eaa2f6d62c6a0a85214521f8ab2e6261e30a6786f151de4f6c895299"} Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.701842 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702011 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" containerID="cri-o://46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" gracePeriod=30 Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702587 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" containerID="cri-o://88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" gracePeriod=30 Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702743 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.711464 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e"} Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.711556 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725282 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725369 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " 
pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725693 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725744 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.726680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.729309 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.729292643 podStartE2EDuration="5.729292643s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:04.728664676 +0000 UTC m=+1229.294968500" watchObservedRunningTime="2026-01-30 10:32:04.729292643 +0000 UTC m=+1229.295596467" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.743323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod 
\"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.743751 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.746509 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.747051 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.747746 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.782809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: 
\"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.797428 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.812040 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.828224 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.834684 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.836794 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.839138 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.859867 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.882036 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960460 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960682 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960759 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960784 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") 
" pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063682 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063842 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063868 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063953 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.064006 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.064907 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.069037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.073831 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.075839 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.081692 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.083170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.091029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.206073 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.400354 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"] Jan 30 10:32:05 crc kubenswrapper[4984]: W0130 10:32:05.480167 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217935e2_7a1e_44a6_b6fd_e64c41155d6d.slice/crio-f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e WatchSource:0}: Error finding container f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e: Status 404 returned error can't find the container with id f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.484215 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.536947 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.579813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580733 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580885 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.581054 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.581122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.583160 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs" (OuterVolumeSpecName: "logs") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.583867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.598011 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts" (OuterVolumeSpecName: "scripts") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.599445 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8" (OuterVolumeSpecName: "kube-api-access-fsrk8") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "kube-api-access-fsrk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.612642 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.633221 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.663350 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data" (OuterVolumeSpecName: "config-data") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684240 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684298 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684314 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684322 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684331 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684759 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684912 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742651 4984 generic.go:334] "Generic 
(PLEG): container finished" podID="856d75b5-d459-46da-99d3-123ebe89a26d" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" exitCode=0 Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742681 4984 generic.go:334] "Generic (PLEG): container finished" podID="856d75b5-d459-46da-99d3-123ebe89a26d" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" exitCode=143 Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742762 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"5d408605319c89d081b5548ebdb4c7ea288ca2bdefa7e08a28be726765947e9d"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742777 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742895 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.757579 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.774330 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.786715 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.789313 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.800939 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.811083 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815079 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.815566 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815585 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.815603 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815611 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815835 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815859 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.817467 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.821898 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.954574742 podStartE2EDuration="6.821876301s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="2026-01-30 10:32:02.153910161 +0000 UTC m=+1226.720213985" lastFinishedPulling="2026-01-30 10:32:03.02121172 +0000 UTC m=+1227.587515544" observedRunningTime="2026-01-30 10:32:05.816802343 +0000 UTC m=+1230.383106167" watchObservedRunningTime="2026-01-30 10:32:05.821876301 +0000 UTC m=+1230.388180125" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.823767 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.824126 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.827740 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 10:32:05 crc kubenswrapper[4984]: 
I0130 10:32:05.858889 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.875060 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.883412 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883454 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} err="failed to get container status \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883480 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.883813 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 
10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883870 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} err="failed to get container status \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": rpc error: code = NotFound desc = could not find container \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883897 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.884289 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} err="failed to get container status \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.884601 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.888389 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} err="failed to get container status \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": rpc error: code = NotFound desc = could not find container 
\"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894362 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894475 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894604 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" 
(UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894648 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894709 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894742 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894776 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996630 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 
10:32:05.996667 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996725 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996757 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996805 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996875 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996896 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.998001 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.998775 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.001771 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.002331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.004710 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.004788 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.013006 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.013620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.030331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.112498 4984 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" path="/var/lib/kubelet/pods/3f5ff484-b6c4-42ea-ae17-1b11c214f435/volumes" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.113640 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" path="/var/lib/kubelet/pods/856d75b5-d459-46da-99d3-123ebe89a26d/volumes" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.160905 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.639324 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.802284 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307"} Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.802333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"38638789db9d29c3ef911b6de9f957b454b4ebfc3c25c50089b77550538df8d3"} Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.803837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"07efaa1e3ddf1f9bed633f866e97d57d0a62927318cdc459f3e2924950215643"} Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.808508 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"2d9b322a61d7e7703203e888c055e0be29b8fc16e5677e90111c7155f707dadd"} Jan 30 10:32:06 crc 
kubenswrapper[4984]: I0130 10:32:06.808539 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"2a040a81aefde9307ef906faa88a260b742855fb6023e6f2987ffdd59efd5fa8"} Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.838197 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" podStartSLOduration=2.838178901 podStartE2EDuration="2.838178901s" podCreationTimestamp="2026-01-30 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:06.826708118 +0000 UTC m=+1231.393011952" watchObservedRunningTime="2026-01-30 10:32:06.838178901 +0000 UTC m=+1231.404482725" Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.830769 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"98af5f2d73575000638d0a726277b59148b003349f5e37c0cf0923dfe0121884"} Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.833525 4984 generic.go:334] "Generic (PLEG): container finished" podID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerID="886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529" exitCode=0 Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.833578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerDied","Data":"886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529"} Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836717 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5"} Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836788 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836821 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.845832 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"bd242f1bd39cc553d6bcdfef5b3dfa3a5cac86dad44d22bc1b8164f58c4f7dc3"} Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.847322 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.851048 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b"} Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.232727 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.255716 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.255695436 podStartE2EDuration="4.255695436s" podCreationTimestamp="2026-01-30 10:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:08.868732679 +0000 UTC m=+1233.435036513" watchObservedRunningTime="2026-01-30 10:32:09.255695436 +0000 UTC m=+1233.821999270" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.264974 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.265300 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.265505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.271080 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj" (OuterVolumeSpecName: "kube-api-access-75vlj") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: 
"2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "kube-api-access-75vlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.307852 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: "2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.311383 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config" (OuterVolumeSpecName: "config") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: "2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366727 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366737 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.534125 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 
10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865417 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerDied","Data":"b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6"} Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865777 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6" Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865472 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.051041 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.159809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.166882 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.167174 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" containerID="cri-o://b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" gracePeriod=10 Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.169502 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205403 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:32:10 crc kubenswrapper[4984]: E0130 10:32:10.205816 4984 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205834 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205993 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.206989 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.289753 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.380036 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.390215 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394797 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394988 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395010 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395040 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.401730 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.410732 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411131 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411446 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411752 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7t44v" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498599 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498660 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498685 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498765 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498832 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.500007 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.502974 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.503515 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.504357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.505000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.509641 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.528312 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600864 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.605641 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.609432 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.609542 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.610324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.619779 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: 
\"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.745556 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.828473 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.894911 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.905026 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8"} Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.907321 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.930857 4984 generic.go:334] "Generic (PLEG): container finished" podID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" exitCode=0 Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.931785 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.931916 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"} Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.935705 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4"} Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.935844 4984 scope.go:117] "RemoveContainer" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.976952 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.264728106 podStartE2EDuration="6.976936618s" podCreationTimestamp="2026-01-30 10:32:04 +0000 UTC" firstStartedPulling="2026-01-30 10:32:05.823685131 +0000 UTC m=+1230.389988955" lastFinishedPulling="2026-01-30 10:32:09.535893643 +0000 UTC m=+1234.102197467" observedRunningTime="2026-01-30 10:32:10.945610514 +0000 UTC m=+1235.511914338" watchObservedRunningTime="2026-01-30 10:32:10.976936618 +0000 UTC m=+1235.543240442" Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.999736 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.008915 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 
10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009027 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009055 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009121 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009257 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.011137 4984 scope.go:117] "RemoveContainer" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.028484 
4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm" (OuterVolumeSpecName: "kube-api-access-jdxzm") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "kube-api-access-jdxzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.064919 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.068579 4984 scope.go:117] "RemoveContainer" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" Jan 30 10:32:11 crc kubenswrapper[4984]: E0130 10:32:11.069372 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": container with ID starting with b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273 not found: ID does not exist" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.069415 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"} err="failed to get container status \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": rpc error: code = NotFound desc = could not find container \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": container with ID starting with 
b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273 not found: ID does not exist" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.069440 4984 scope.go:117] "RemoveContainer" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" Jan 30 10:32:11 crc kubenswrapper[4984]: E0130 10:32:11.071147 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": container with ID starting with e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec not found: ID does not exist" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.071187 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec"} err="failed to get container status \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": rpc error: code = NotFound desc = could not find container \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": container with ID starting with e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec not found: ID does not exist" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.080755 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.087297 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.092759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config" (OuterVolumeSpecName: "config") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.105134 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113378 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113434 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113446 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113455 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113484 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113495 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.273367 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.284420 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 
10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.421666 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.447704 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.939928 4984 generic.go:334] "Generic (PLEG): container finished" podID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerID="56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298" exitCode=0 Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.940151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.940300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerStarted","Data":"944e20dd436d6475eadb44cdbbb965933e9f99f6729e1968115646dd8b334bc1"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952373 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler" containerID="cri-o://1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e" gracePeriod=30 Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952671 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952709 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"726ba7faaff55c103e2271e253ff0f17293623696cf0c95eaae899332787dccc"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952887 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe" containerID="cri-o://26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef" gracePeriod=30 Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.109999 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" path="/var/lib/kubelet/pods/d2a86d1a-4829-4934-83dd-b52dc378a4cf/volumes" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136367 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"] Jan 30 10:32:12 crc kubenswrapper[4984]: E0130 10:32:12.136869 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="init" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136891 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="init" Jan 30 10:32:12 crc kubenswrapper[4984]: E0130 10:32:12.136918 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136926 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.137142 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.138368 4984 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.142855 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.148237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.163057 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242290 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242339 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242424 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242546 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242637 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.273003 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.276870 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.279816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.280205 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.280292 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.297530 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.343874 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344231 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344414 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.405993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.408444 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.410691 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.412690 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.416668 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.417749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.425371 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: 
\"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446392 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446450 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446487 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446674 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.484538 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549264 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549373 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549389 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549417 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" 
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549443 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549463 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549479 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.550015 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.575661 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.576392 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.577221 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.580327 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.583660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.584483 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.586498 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.834064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.963139 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerStarted","Data":"98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0"}
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.966881 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"}
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.967017 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.968903 4984 generic.go:334] "Generic (PLEG): container finished" podID="cde56acd-942d-47dd-8417-8c92170502ce" containerID="26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef" exitCode=0
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.969814 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef"}
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.997800 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" podStartSLOduration=2.99777891 podStartE2EDuration="2.99777891s" podCreationTimestamp="2026-01-30 10:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:12.985470938 +0000 UTC m=+1237.551774752" watchObservedRunningTime="2026-01-30 10:32:12.99777891 +0000 UTC m=+1237.564082744"
Jan 30 10:32:13 crc kubenswrapper[4984]: I0130 10:32:13.016166 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-599cd9b588-9ll76" podStartSLOduration=3.016147465 podStartE2EDuration="3.016147465s" podCreationTimestamp="2026-01-30 10:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:13.004713407 +0000 UTC m=+1237.571017221" watchObservedRunningTime="2026-01-30 10:32:13.016147465 +0000 UTC m=+1237.582451289"
Jan 30 10:32:13 crc kubenswrapper[4984]: I0130 10:32:13.978450 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.291142 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293659 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent" containerID="cri-o://2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" gracePeriod=30
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293704 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd" containerID="cri-o://bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" gracePeriod=30
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core" containerID="cri-o://945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" gracePeriod=30
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent" containerID="cri-o://0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" gracePeriod=30
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990014 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" exitCode=0
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990056 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" exitCode=2
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990071 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" exitCode=0
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990081 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" exitCode=0
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990096 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8"}
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b"}
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5"}
Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307"}
Jan 30 10:32:15 crc kubenswrapper[4984]: I0130 10:32:15.486043 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Jan 30 10:32:15 crc kubenswrapper[4984]: I0130 10:32:15.486183 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b65cc758d-9hz7t"
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.003038 4984 generic.go:334] "Generic (PLEG): container finished" podID="cde56acd-942d-47dd-8417-8c92170502ce" containerID="1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e" exitCode=0
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.003125 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e"}
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.336218 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.683035 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.771220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"]
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.771447 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" containerID="cri-o://ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" gracePeriod=30
Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.772049 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" containerID="cri-o://8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" gracePeriod=30
Jan 30 10:32:17 crc kubenswrapper[4984]: I0130 10:32:17.023407 4984 generic.go:334] "Generic (PLEG): container finished" podID="04477670-b6dd-441f-909a-e6b56bf335d5" containerID="ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" exitCode=143
Jan 30 10:32:17 crc kubenswrapper[4984]: I0130 10:32:17.023477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36"}
Jan 30 10:32:18 crc kubenswrapper[4984]: I0130 10:32:18.527374 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.664118 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.665115 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" containerID="cri-o://b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" gracePeriod=30
Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.665508 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd" containerID="cri-o://f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" gracePeriod=30
Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.947841 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:36198->10.217.0.162:9311: read: connection reset by peer"
Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.948171 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:36194->10.217.0.162:9311: read: connection reset by peer"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.023038 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.069453 4984 generic.go:334] "Generic (PLEG): container finished" podID="1238c32f-7644-4b33-8960-b97c64733162" containerID="e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" exitCode=137
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.069766 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500"}
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083629 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083684 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8"}
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083756 4984 scope.go:117] "RemoveContainer" containerID="26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.100290 4984 generic.go:334] "Generic (PLEG): container finished" podID="04477670-b6dd-441f-909a-e6b56bf335d5" containerID="8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" exitCode=0
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.107182 4984 generic.go:334] "Generic (PLEG): container finished" podID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerID="b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" exitCode=143
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.129923 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607"}
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.129964 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540"}
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.144574 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.145485 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.164925 4984 scope.go:117] "RemoveContainer" containerID="1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201439 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201543 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201632 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201708 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201829 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.204526 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.211663 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.213431 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts" (OuterVolumeSpecName: "scripts") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.213766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq" (OuterVolumeSpecName: "kube-api-access-4ggvq") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "kube-api-access-4ggvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.294624 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307869 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307936 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308750 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308837 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309071 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309121 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309196 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309217 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309267 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309676 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309694 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309703 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309712 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.312854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs" (OuterVolumeSpecName: "logs") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.313515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts" (OuterVolumeSpecName: "scripts") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.313552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.321411 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq" (OuterVolumeSpecName: "kube-api-access-hsxfq") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "kube-api-access-hsxfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.321827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.329978 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx" (OuterVolumeSpecName: "kube-api-access-4zvlx") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "kube-api-access-4zvlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.346712 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data" (OuterVolumeSpecName: "config-data") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.360420 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts" (OuterVolumeSpecName: "scripts") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.372403 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.384323 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.388195 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"]
Jan 30 10:32:20 crc kubenswrapper[4984]: W0130 10:32:20.390691 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda88ca399_adf6_4df4_8216_84de7603712b.slice/crio-38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4 WatchSource:0}: Error finding container 38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4: Status 404 returned error can't find the container with id 38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.397299 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414579 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414621 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414631 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414640 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414648 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414656 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414665 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414674 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414683 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414692 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414700 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.415187 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data" (OuterVolumeSpecName: "config-data") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.437881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.461618 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.487093 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data" (OuterVolumeSpecName: "config-data") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516181 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516298 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516320 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516527 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516556 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516904 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516917 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516926 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516933 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.520515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8" (OuterVolumeSpecName: "kube-api-access-jmcx8") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "kube-api-access-jmcx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.521038 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs" (OuterVolumeSpecName: "logs") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.527527 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.561778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.584309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data" (OuterVolumeSpecName: "config-data") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593295 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593638 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593654 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593667 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593673 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593686 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593692 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593701 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593706 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593718 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler" Jan 30 10:32:20 crc 
kubenswrapper[4984]: I0130 10:32:20.593724 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593737 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593743 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593754 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593761 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593776 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593787 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593793 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log" Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593805 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent" Jan 30 10:32:20 crc 
kubenswrapper[4984]: I0130 10:32:20.593810 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593970 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593980 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594050 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594065 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594073 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594081 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594087 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594099 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594106 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler" Jan 30 10:32:20 
crc kubenswrapper[4984]: I0130 10:32:20.594114 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618117 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618158 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618169 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618178 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618187 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.642085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.696151 4984 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.697444 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.704238 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.719364 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.719529 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.740279 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.750103 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.760616 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.762016 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.764072 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.792904 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.814656 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.815878 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.819161 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod 
\"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821697 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.822537 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.834033 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.839064 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.841293 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925442 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925876 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925914 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926004 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926580 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926641 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926846 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.928392 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.930693 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.930894 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.944227 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.963699 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.980325 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.982510 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.022588 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.022816 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" containerID="cri-o://f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" gracePeriod=10 Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032924 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032963 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032994 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033091 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033130 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033171 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.036154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.036894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.040955 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.050978 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.053482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: W0130 10:32:21.053728 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e442774_b2c1_418a_a5b2_edfd20f23c27.slice/crio-e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c WatchSource:0}: Error finding container e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c: Status 404 returned error can't find the container with id e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.054823 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.056118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.056781 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.057438 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.058371 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.059184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.059719 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.070078 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.072156 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.083186 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.125524 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"141e094b-e8c8-4a61-b93c-8dec5ac89823","Type":"ContainerStarted","Data":"dfe2def501ea9b4238bae8b67e193723236fcc6fdce3113dd8be629b8c86ffc3"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130514 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130555 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"69bd05a6495e5cb7cdf4e1d3db592b4ecb95799d07ea2642a2cb5673af58d135"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130593 4984 scope.go:117] "RemoveContainer" containerID="5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.133638 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"eea68d359f72674ff8e2398b4f8bdd404622b91454d36c4c7f4f4f8ec7687657"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.133673 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.135024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.135095 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " 
pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.150029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.150069 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.151954 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.773506978 podStartE2EDuration="18.151932828s" podCreationTimestamp="2026-01-30 10:32:03 +0000 UTC" firstStartedPulling="2026-01-30 10:32:04.490084584 +0000 UTC m=+1229.056388408" lastFinishedPulling="2026-01-30 10:32:19.868510434 +0000 UTC m=+1244.434814258" observedRunningTime="2026-01-30 10:32:21.146800329 +0000 UTC m=+1245.713104163" watchObservedRunningTime="2026-01-30 10:32:21.151932828 +0000 UTC m=+1245.718236642" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.188767 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.221346 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.224048 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.231381 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"38638789db9d29c3ef911b6de9f957b454b4ebfc3c25c50089b77550538df8d3"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.231491 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.237320 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238356 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238430 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.241229 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.244601 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.248026 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.255213 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.260053 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.261534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.266995 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.303372 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.322819 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.334284 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.341210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.341427 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.342201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.360646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.360711 4984 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.376469 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.378878 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.381525 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.384788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.385629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.388012 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.444268 4984 scope.go:117] "RemoveContainer" containerID="e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.446406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.446575 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: 
\"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.461365 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548693 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548886 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549011 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549032 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549810 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.586803 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.591066 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651208 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 
10:32:21.651303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651321 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651369 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.652495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.653610 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " 
pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.656000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.656613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.657619 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.658217 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.663508 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.681671 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.718867 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.743542 4984 scope.go:117] "RemoveContainer" containerID="8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.859807 4984 scope.go:117] "RemoveContainer" containerID="ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.885683 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.898224 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.912838 4984 scope.go:117] "RemoveContainer" containerID="bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.017835 4984 scope.go:117] "RemoveContainer" containerID="945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059186 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059269 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059291 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062728 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062841 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062968 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.078430 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl" (OuterVolumeSpecName: "kube-api-access-qjqvl") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "kube-api-access-qjqvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.111514 4984 scope.go:117] "RemoveContainer" containerID="0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.124561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config" (OuterVolumeSpecName: "config") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.142293 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" path="/var/lib/kubelet/pods/04477670-b6dd-441f-909a-e6b56bf335d5/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.142987 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1238c32f-7644-4b33-8960-b97c64733162" path="/var/lib/kubelet/pods/1238c32f-7644-4b33-8960-b97c64733162/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.143989 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" path="/var/lib/kubelet/pods/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.144901 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde56acd-942d-47dd-8417-8c92170502ce" path="/var/lib/kubelet/pods/cde56acd-942d-47dd-8417-8c92170502ce/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.172094 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.172385 
4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.217010 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.223471 4984 scope.go:117] "RemoveContainer" containerID="2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.271427 4984 generic.go:334] "Generic (PLEG): container finished" podID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" exitCode=0 Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272210 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272199 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.294758 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"0701fcca5a69f41053b9d3d51870bbb16794337006362efe4e3e24f51cc6c3ec"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.296013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerStarted","Data":"bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.300312 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.300325 4984 scope.go:117] "RemoveContainer" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.313725 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.316810 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"dbe38b0ed08be70a90d19d5ca66d6f7fb98d8d3cb6fe3e0187d277c5e27088c3"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.321611 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.321653 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.331769 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.342206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.346708 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerStarted","Data":"55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.346811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerStarted","Data":"1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.353862 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.371011 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.387291 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77f6d8f475-hmb99" podStartSLOduration=10.387226594 podStartE2EDuration="10.387226594s" podCreationTimestamp="2026-01-30 10:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:22.359363152 +0000 UTC m=+1246.925666976" watchObservedRunningTime="2026-01-30 10:32:22.387226594 +0000 UTC m=+1246.953530418" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.403822 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.406643 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.406701 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.430598 4984 scope.go:117] "RemoveContainer" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.445489 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7vrp9" podStartSLOduration=2.445458905 podStartE2EDuration="2.445458905s" 
podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:22.378767606 +0000 UTC m=+1246.945071430" watchObservedRunningTime="2026-01-30 10:32:22.445458905 +0000 UTC m=+1247.011762719" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.685276 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.694537 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698231 4984 scope.go:117] "RemoveContainer" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: E0130 10:32:22.698665 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": container with ID starting with f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3 not found: ID does not exist" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698688 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} err="failed to get container status \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": rpc error: code = NotFound desc = could not find container \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": container with ID starting with f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3 not found: ID does not exist" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698709 4984 scope.go:117] "RemoveContainer" 
containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: E0130 10:32:22.698929 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": container with ID starting with 2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d not found: ID does not exist" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698947 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d"} err="failed to get container status \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": rpc error: code = NotFound desc = could not find container \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": container with ID starting with 2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d not found: ID does not exist" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.715654 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.727228 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.741135 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.805497 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.904015 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 
10:32:23.416956 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerStarted","Data":"b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.417201 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerStarted","Data":"e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.420800 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.439766 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" podStartSLOduration=2.43974878 podStartE2EDuration="2.43974878s" podCreationTimestamp="2026-01-30 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.439571465 +0000 UTC m=+1248.005875289" watchObservedRunningTime="2026-01-30 10:32:23.43974878 +0000 UTC m=+1248.006052604" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.460610 4984 generic.go:334] "Generic (PLEG): container finished" podID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerID="f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.460666 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.466495 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerStarted","Data":"9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.466536 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerStarted","Data":"575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.475475 4984 generic.go:334] "Generic (PLEG): container finished" podID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerID="55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.475833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerDied","Data":"55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.504223 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f837-account-create-update-tljj4" podStartSLOduration=3.504203428 podStartE2EDuration="3.504203428s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.494313682 +0000 UTC m=+1248.060617506" watchObservedRunningTime="2026-01-30 10:32:23.504203428 +0000 UTC m=+1248.070507252" Jan 30 10:32:23 crc kubenswrapper[4984]: 
I0130 10:32:23.508965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerStarted","Data":"73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.509007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerStarted","Data":"8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.525754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerStarted","Data":"6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.525802 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerStarted","Data":"fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.528772 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"440b0f3902c394e1b1df1eaa1c9d17747a052729fcb06b831cd00ac93d764dfe"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.555829 4984 generic.go:334] "Generic (PLEG): container finished" podID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerID="f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.555920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" 
event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerDied","Data":"f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.557426 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xjhtp" podStartSLOduration=3.557404853 podStartE2EDuration="3.557404853s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.548623876 +0000 UTC m=+1248.114927690" watchObservedRunningTime="2026-01-30 10:32:23.557404853 +0000 UTC m=+1248.123708677" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.564592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"14fd681a6599f8e55f3b67ab760e8fc1db1dc3e02014cd45efc644931448963a"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.592054 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5565c8d7-xqnh6" podStartSLOduration=11.592034977 podStartE2EDuration="11.592034977s" podCreationTimestamp="2026-01-30 10:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.589136309 +0000 UTC m=+1248.155440133" watchObservedRunningTime="2026-01-30 10:32:23.592034977 +0000 UTC m=+1248.158338801" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.760388 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.975849 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976418 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976465 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976647 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976803 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976842 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976878 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.977981 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.978587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs" (OuterVolumeSpecName: "logs") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.981921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.988564 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2" (OuterVolumeSpecName: "kube-api-access-r9ql2") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "kube-api-access-r9ql2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.994135 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts" (OuterVolumeSpecName: "scripts") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.043478 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.058384 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data" (OuterVolumeSpecName: "config-data") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.070832 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081615 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081664 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081673 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 
10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081683 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081715 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081725 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081733 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.103045 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" path="/var/lib/kubelet/pods/17f579b7-9f28-42f6-a7be-b7c562962f19/volumes" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.105352 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.183293 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.587021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"226cde834bd61cb47219e223cc386de57f67df31fc05d2712714e74c56daeb00"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.591647 4984 generic.go:334] "Generic (PLEG): container finished" podID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerID="b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.591833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerDied","Data":"b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.596774 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"8a1e7d08bcb7a1c10909d3b6f8549348ca67f5b537c84b6ec8529217335158a6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.596830 4984 scope.go:117] "RemoveContainer" containerID="f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.597482 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.598768 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.602010 4984 generic.go:334] "Generic (PLEG): container finished" podID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerID="9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.602562 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerDied","Data":"9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.606379 4984 generic.go:334] "Generic (PLEG): container finished" podID="24e68f06-af93-45d0-bf19-26469cac41f1" containerID="6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.606571 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerDied","Data":"6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.624034 4984 generic.go:334] "Generic (PLEG): container finished" podID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerID="73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.624299 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" 
event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerDied","Data":"73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.625186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.639891 4984 scope.go:117] "RemoveContainer" containerID="b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.660311 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.676678 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699436 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699804 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699824 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699830 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699842 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="init" Jan 30 10:32:24 crc 
kubenswrapper[4984]: I0130 10:32:24.699850 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="init" Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699865 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699871 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700318 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700338 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700345 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.701315 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.705089 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.705205 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.716610 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.901723 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902012 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902037 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhc5\" (UniqueName: 
\"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902281 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902446 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004447 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004476 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004555 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhc5\" (UniqueName: \"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004585 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.005587 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.006390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.018292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.018295 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.021212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhc5\" (UniqueName: \"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.025629 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.038440 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.067198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.147814 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.249709 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.261094 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.308624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.308731 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.309743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" (UID: "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.323618 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.333819 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk" (OuterVolumeSpecName: "kube-api-access-bmdwk") pod "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" (UID: "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24"). InnerVolumeSpecName "kube-api-access-bmdwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411224 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411337 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411452 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411840 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411836 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61ce47a3-89a8-45f2-809e-9aaab0e718e2" (UID: "61ce47a3-89a8-45f2-809e-9aaab0e718e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411859 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.412548 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b0be8dd-7b50-43e1-b223-8d5082a0c499" (UID: "0b0be8dd-7b50-43e1-b223-8d5082a0c499"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.416578 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn" (OuterVolumeSpecName: "kube-api-access-wxfdn") pod "61ce47a3-89a8-45f2-809e-9aaab0e718e2" (UID: "61ce47a3-89a8-45f2-809e-9aaab0e718e2"). InnerVolumeSpecName "kube-api-access-wxfdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.433670 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7" (OuterVolumeSpecName: "kube-api-access-7vtd7") pod "0b0be8dd-7b50-43e1-b223-8d5082a0c499" (UID: "0b0be8dd-7b50-43e1-b223-8d5082a0c499"). InnerVolumeSpecName "kube-api-access-7vtd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514331 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514671 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514681 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxfdn\" (UniqueName: 
\"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645419 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerDied","Data":"bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d"} Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645513 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645458 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.661139 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af"} Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662458 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerDied","Data":"1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb"} Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662485 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662482 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7vrp9" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664027 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerDied","Data":"8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93"} Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664049 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664093 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.670961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"837a82394650be59869c86f7932775bd9f7396ce5d819163e507bb5bc612fb8a"} Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.697713 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.697693677 podStartE2EDuration="5.697693677s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:25.694948103 +0000 UTC m=+1250.261251927" watchObservedRunningTime="2026-01-30 10:32:25.697693677 +0000 UTC m=+1250.263997491" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.050009 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.146281 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" path="/var/lib/kubelet/pods/5b0932ca-60dc-45f3-96ed-e8a9c6040375/volumes" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.222739 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.355196 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.472179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"3c78c96a-fba2-4de8-ab70-a16d31722959\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.472420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"3c78c96a-fba2-4de8-ab70-a16d31722959\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.484405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f" (OuterVolumeSpecName: "kube-api-access-lxt4f") pod "3c78c96a-fba2-4de8-ab70-a16d31722959" (UID: "3c78c96a-fba2-4de8-ab70-a16d31722959"). InnerVolumeSpecName "kube-api-access-lxt4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.485587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c78c96a-fba2-4de8-ab70-a16d31722959" (UID: "3c78c96a-fba2-4de8-ab70-a16d31722959"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.577057 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.577117 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.703204 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerDied","Data":"fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c"} Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.703268 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.713701 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"e157dae2537056a9aade17205b36a1b238748495f14a0b927d45cb2aae736603"} Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727442 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerDied","Data":"e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf"} Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727820 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727476 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.742093 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f"} Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.753009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerDied","Data":"575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46"} Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.753050 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.770778 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.770963 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"24e68f06-af93-45d0-bf19-26469cac41f1\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"24e68f06-af93-45d0-bf19-26469cac41f1\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.896429 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24e68f06-af93-45d0-bf19-26469cac41f1" (UID: "24e68f06-af93-45d0-bf19-26469cac41f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.898627 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" (UID: "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.906606 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7" (OuterVolumeSpecName: "kube-api-access-wtcv7") pod "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" (UID: "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1"). InnerVolumeSpecName "kube-api-access-wtcv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.922471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2" (OuterVolumeSpecName: "kube-api-access-zxxh2") pod "24e68f06-af93-45d0-bf19-26469cac41f1" (UID: "24e68f06-af93-45d0-bf19-26469cac41f1"). InnerVolumeSpecName "kube-api-access-zxxh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.996977 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997018 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997033 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997046 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.440977 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.442693 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" containerID="cri-o://26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" gracePeriod=30 Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.443317 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" 
containerID="cri-o://01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" gracePeriod=30 Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.762824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"f1e3598428243b6bbc619a1616f2f8a7f845042b9830ea6e8b0a96f9caed0944"} Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768562 4984 generic.go:334] "Generic (PLEG): container finished" podID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerID="26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" exitCode=143 Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09"} Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768709 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768787 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.845568 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.853132 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.199965 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.405918 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68474f84b8-6pzwt" Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.791401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"679a0e2026d23d1b3baddab54bccd4fb9d36b4a871c50da9c074d8bbf87cb23c"} Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.796517 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" containerID="cri-o://5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6" gracePeriod=30 Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.796908 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0"} Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797542 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 
10:32:28.797621 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" containerID="cri-o://e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f" gracePeriod=30 Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797652 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" containerID="cri-o://3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af" gracePeriod=30 Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797708 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" containerID="cri-o://295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0" gracePeriod=30 Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.818516 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.818497805 podStartE2EDuration="4.818497805s" podCreationTimestamp="2026-01-30 10:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:28.808262799 +0000 UTC m=+1253.374566623" watchObservedRunningTime="2026-01-30 10:32:28.818497805 +0000 UTC m=+1253.384801639" Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.837024 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.220540147 podStartE2EDuration="7.837009024s" podCreationTimestamp="2026-01-30 10:32:21 +0000 UTC" firstStartedPulling="2026-01-30 10:32:22.877985749 +0000 UTC m=+1247.444289573" lastFinishedPulling="2026-01-30 10:32:28.494454626 +0000 UTC 
m=+1253.060758450" observedRunningTime="2026-01-30 10:32:28.833010177 +0000 UTC m=+1253.399314001" watchObservedRunningTime="2026-01-30 10:32:28.837009024 +0000 UTC m=+1253.403312848" Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815084 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f" exitCode=2 Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815122 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af" exitCode=0 Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815123 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f"} Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815190 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af"} Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.832841 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6" exitCode=0 Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.833240 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6"} Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.840579 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerID="01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" exitCode=0 Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.840654 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d"} Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.160683 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288678 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: 
\"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288923 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288999 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.289104 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.289149 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.290448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs" (OuterVolumeSpecName: "logs") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.292078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.297195 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts" (OuterVolumeSpecName: "scripts") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.298464 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8" (OuterVolumeSpecName: "kube-api-access-7ggp8") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "kube-api-access-7ggp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.301393 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.346510 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.366895 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data" (OuterVolumeSpecName: "config-data") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.375307 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392031 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392063 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392074 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392088 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392143 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392153 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392162 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392171 4984 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.419993 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428408 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428767 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428783 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428800 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428805 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428819 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428826 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428843 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" 
containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428849 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428862 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428867 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428879 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428884 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428893 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428898 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428907 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428913 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429087 4984 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429095 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429105 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429117 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429124 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429136 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429145 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429155 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429705 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.433905 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.434139 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lllgq" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.434325 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.441415 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.471761 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.494093 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " 
pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595651 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697507 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697654 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: 
\"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.703427 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.703482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.705973 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.717565 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: 
\"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.746356 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b"} Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861449 4984 scope.go:117] "RemoveContainer" containerID="01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861522 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.884643 4984 scope.go:117] "RemoveContainer" containerID="26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.927987 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.948145 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.969955 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.971488 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.973777 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.974038 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.986366 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.101809 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" path="/var/lib/kubelet/pods/e5a91d1d-433e-415f-83f8-04185f2bae8e/volumes" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105877 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105935 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " 
pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106330 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106448 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.209809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.210187 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.210757 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211117 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " 
pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211182 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211341 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211429 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.212021 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 
10:32:32.212327 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.216851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.218205 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.218214 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.221041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.276743 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.277585 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.302355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.313891 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:32:32 crc kubenswrapper[4984]: W0130 10:32:32.321462 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeaa8458_e32e_4a6f_9e67_3e394d9daa32.slice/crio-c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c WatchSource:0}: Error finding container c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c: Status 404 returned error can't find the container with id c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.775144 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:32 crc kubenswrapper[4984]: W0130 10:32:32.786501 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa01bff_d884_4b1f_b0c2_8c0fbd957a30.slice/crio-4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86 WatchSource:0}: Error finding 
container 4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86: Status 404 returned error can't find the container with id 4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86 Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.874187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86"} Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.876027 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerStarted","Data":"c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c"} Jan 30 10:32:33 crc kubenswrapper[4984]: I0130 10:32:33.898751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"36ae8e98f3586be7682cdfe6e2f3a1fabe4f2cc8e732cf8315dd0e85dce69c2c"} Jan 30 10:32:34 crc kubenswrapper[4984]: I0130 10:32:34.912892 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"7a6518c74d129770f26cd6d11dad7296f644bb1ce8f14294e6e0067d40d81472"} Jan 30 10:32:34 crc kubenswrapper[4984]: I0130 10:32:34.950978 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.950957537 podStartE2EDuration="3.950957537s" podCreationTimestamp="2026-01-30 10:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:34.933584349 +0000 UTC m=+1259.499888213" watchObservedRunningTime="2026-01-30 
10:32:34.950957537 +0000 UTC m=+1259.517261371" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.324786 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.325068 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.372719 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.390559 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.922883 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.922928 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:37 crc kubenswrapper[4984]: I0130 10:32:37.790097 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:37 crc kubenswrapper[4984]: I0130 10:32:37.795200 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:40 crc kubenswrapper[4984]: I0130 10:32:40.765052 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:40 crc kubenswrapper[4984]: I0130 10:32:40.986004 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" 
event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerStarted","Data":"f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d"} Jan 30 10:32:41 crc kubenswrapper[4984]: I0130 10:32:41.015889 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" podStartSLOduration=2.405049383 podStartE2EDuration="10.015866437s" podCreationTimestamp="2026-01-30 10:32:31 +0000 UTC" firstStartedPulling="2026-01-30 10:32:32.329696182 +0000 UTC m=+1256.896000006" lastFinishedPulling="2026-01-30 10:32:39.940513236 +0000 UTC m=+1264.506817060" observedRunningTime="2026-01-30 10:32:41.005665882 +0000 UTC m=+1265.571969706" watchObservedRunningTime="2026-01-30 10:32:41.015866437 +0000 UTC m=+1265.582170271" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.302923 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.304067 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.338092 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.349426 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.507503 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.589444 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.590782 4984 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-599cd9b588-9ll76" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api" containerID="cri-o://5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" gracePeriod=30 Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.590922 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-599cd9b588-9ll76" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd" containerID="cri-o://e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" gracePeriod=30 Jan 30 10:32:43 crc kubenswrapper[4984]: I0130 10:32:43.003170 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:32:43 crc kubenswrapper[4984]: I0130 10:32:43.003203 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:32:44 crc kubenswrapper[4984]: I0130 10:32:44.918692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.017828 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.029855 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" exitCode=0 Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.029928 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"} Jan 30 10:32:51 crc kubenswrapper[4984]: I0130 10:32:51.723811 4984 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.694870 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.825961 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826143 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826237 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826525 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: 
\"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.833485 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.833734 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd" (OuterVolumeSpecName: "kube-api-access-z6tgd") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "kube-api-access-z6tgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.883595 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.901490 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config" (OuterVolumeSpecName: "config") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.911988 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929311 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929351 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929368 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929380 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929391 4984 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.100963 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c1c7220-21e6-477f-aa26-eb230da7178f" 
containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" exitCode=0 Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"} Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101046 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"726ba7faaff55c103e2271e253ff0f17293623696cf0c95eaae899332787dccc"} Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101045 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101066 4984 scope.go:117] "RemoveContainer" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.131743 4984 scope.go:117] "RemoveContainer" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.156215 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.158920 4984 scope.go:117] "RemoveContainer" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" Jan 30 10:32:53 crc kubenswrapper[4984]: E0130 10:32:53.159597 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": container with ID starting with e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08 not found: ID does not exist" 
containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.159651 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"} err="failed to get container status \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": rpc error: code = NotFound desc = could not find container \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": container with ID starting with e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08 not found: ID does not exist" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.159683 4984 scope.go:117] "RemoveContainer" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" Jan 30 10:32:53 crc kubenswrapper[4984]: E0130 10:32:53.160214 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": container with ID starting with 5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a not found: ID does not exist" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.160267 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"} err="failed to get container status \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": rpc error: code = NotFound desc = could not find container \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": container with ID starting with 5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a not found: ID does not exist" Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.165060 4984 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.100544 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" path="/var/lib/kubelet/pods/4c1c7220-21e6-477f-aa26-eb230da7178f/volumes" Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.110431 4984 generic.go:334] "Generic (PLEG): container finished" podID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerID="f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d" exitCode=0 Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.110513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerDied","Data":"f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d"} Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.482703 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574690 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574783 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574958 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.575010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.585000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f" (OuterVolumeSpecName: "kube-api-access-7wr4f") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "kube-api-access-7wr4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.604355 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts" (OuterVolumeSpecName: "scripts") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.613363 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data" (OuterVolumeSpecName: "config-data") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.616403 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678274 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678329 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678346 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678360 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141781 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerDied","Data":"c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c"} Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141830 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141862 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.224942 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225794 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd" Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225809 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225819 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync" Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225844 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225852 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226078 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226103 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226120 4984 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226831 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.229843 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.231692 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lllgq" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.238040 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289629 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28r6\" (UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: 
I0130 10:32:56.391526 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.391678 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.391732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28r6\" (UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.397137 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.397978 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.408838 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28r6\" 
(UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.553987 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:57 crc kubenswrapper[4984]: I0130 10:32:57.015287 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 10:32:57 crc kubenswrapper[4984]: I0130 10:32:57.151660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d02a683-2231-4e04-89bb-748baf8bc65d","Type":"ContainerStarted","Data":"27dce1ad12cff1aa5d095db69bf9b05ed524eefcb192792ee9726010a9ea29bf"} Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.166224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d02a683-2231-4e04-89bb-748baf8bc65d","Type":"ContainerStarted","Data":"266948e746c3c855632dfb910262da4a921ac76a4389b29a77dc6bdd2fda4db3"} Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.166587 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.191400 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.191382313 podStartE2EDuration="2.191382313s" podCreationTimestamp="2026-01-30 10:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:58.183612093 +0000 UTC m=+1282.749915917" watchObservedRunningTime="2026-01-30 10:32:58.191382313 +0000 UTC m=+1282.757686137" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.178428 4984 generic.go:334] 
"Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0" exitCode=137 Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.178498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0"} Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.179131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2"} Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.179152 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.254568 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351168 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351210 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351242 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351317 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351470 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.352391 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.352640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.358341 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2" (OuterVolumeSpecName: "kube-api-access-fwkv2") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "kube-api-access-fwkv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.358492 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts" (OuterVolumeSpecName: "scripts") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.376279 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.426736 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.439971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data" (OuterVolumeSpecName: "config-data") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453584 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453612 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453626 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453638 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453648 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453658 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453668 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.188627 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.216386 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.226521 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240335 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240772 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240790 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240819 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240826 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240855 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240876 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 
10:33:00.240883 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241065 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241085 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241105 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241120 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.242915 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.244829 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.244875 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.260037 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371593 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371665 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372009 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372079 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372307 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475127 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475202 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475263 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475323 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475364 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.476021 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " 
pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.476398 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.482114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.483056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.483645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.486795 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.515298 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4jm\" (UniqueName: 
\"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.560223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.041415 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:01 crc kubenswrapper[4984]: W0130 10:33:01.050500 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c10d6ea_d3d3_49cf_8185_0b4946edc4be.slice/crio-6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4 WatchSource:0}: Error finding container 6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4: Status 404 returned error can't find the container with id 6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4 Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.054950 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.202569 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4"} Jan 30 10:33:02 crc kubenswrapper[4984]: I0130 10:33:02.102923 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" path="/var/lib/kubelet/pods/15f1513a-b6e2-45fc-812c-a5dcb490d5bd/volumes" Jan 30 10:33:02 crc kubenswrapper[4984]: I0130 10:33:02.210941 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce"} Jan 30 10:33:03 crc kubenswrapper[4984]: I0130 10:33:03.225427 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb"} Jan 30 10:33:06 crc kubenswrapper[4984]: I0130 10:33:06.580857 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.111611 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.113043 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.116790 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.117037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.131728 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.201439 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.203540 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.204120 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.204643 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.275318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f"} Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.286612 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.293315 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.296867 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306236 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306343 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306555 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.313085 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.314735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.332359 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.337231 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.437662 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.456385 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.457590 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.460688 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.503472 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.504985 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.521642 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523170 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523218 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523271 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523288 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523387 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.525897 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.540743 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.540936 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.549400 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.552880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.617320 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.618587 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.620410 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625318 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625862 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625951 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " 
pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626106 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626219 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626287 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626440 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.638996 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.640663 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.645597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.653323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.658747 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.699733 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.722395 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.724527 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.728908 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.728956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729078 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729116 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: 
I0130 10:33:07.729150 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729637 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.731951 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.736520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.769541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.777095 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.819532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.830958 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832192 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: 
\"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832372 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832545 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833555 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833703 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833762 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.838094 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.853269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.855156 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc 
kubenswrapper[4984]: I0130 10:33:07.936811 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.940997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941338 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941500 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941858 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.942372 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.943154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945699 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945713 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945768 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.958905 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.984186 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.007406 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.086624 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.112237 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.225463 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.293857 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerStarted","Data":"e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27"} Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.298550 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerStarted","Data":"af5529722b6cfea0d21c483516240a6c61e08bb8fa1bfc0ece4e5fb90209726f"} Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.341349 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.528886 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.530422 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.534204 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.534409 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.541043 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.556642 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559402 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559760 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.607260 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.662949 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663074 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663102 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663324 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: 
\"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.674086 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.675928 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.677519 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.678668 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.690052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: W0130 10:33:08.699327 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbcb3e98_2063_421d_a76f_bca749fa2824.slice/crio-075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d WatchSource:0}: Error finding container 075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d: Status 404 returned error can't find the container with id 075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.703300 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.856564 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.315226 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerStarted","Data":"d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322202 4984 generic.go:334] "Generic (PLEG): container finished" podID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerID="aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8" exitCode=0 Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322419 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerStarted","Data":"d321da41062e4b6042ed3a9bb6a7b9877923a06f6b266f1b243b188fd84ea8bc"} Jan 30 10:33:09 crc 
kubenswrapper[4984]: I0130 10:33:09.324092 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"ecd4903f8d6e5a12e35abf7e02e0342af660b1313797fb073846a8e0fffb44cd"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.328390 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.328805 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.331062 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerStarted","Data":"075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.333164 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"e2e94b843b76f25cb381e78e8277fe2519e55455a14ac71ad0fc880044917b0f"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.339686 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hphht" podStartSLOduration=2.339666321 podStartE2EDuration="2.339666321s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:09.334760489 +0000 UTC m=+1293.901064313" watchObservedRunningTime="2026-01-30 10:33:09.339666321 +0000 UTC m=+1293.905970145" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.370062 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.134710124 podStartE2EDuration="9.37003541s" podCreationTimestamp="2026-01-30 10:33:00 +0000 UTC" firstStartedPulling="2026-01-30 10:33:01.054490121 +0000 UTC m=+1285.620793985" lastFinishedPulling="2026-01-30 10:33:08.289815437 +0000 UTC m=+1292.856119271" observedRunningTime="2026-01-30 10:33:09.361521941 +0000 UTC m=+1293.927825775" watchObservedRunningTime="2026-01-30 10:33:09.37003541 +0000 UTC m=+1293.936339234" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.399609 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.351015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerStarted","Data":"dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.352768 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.358604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerStarted","Data":"01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.358641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerStarted","Data":"7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.379064 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" podStartSLOduration=3.379042303 podStartE2EDuration="3.379042303s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:10.366548076 +0000 UTC m=+1294.932851900" watchObservedRunningTime="2026-01-30 10:33:10.379042303 +0000 UTC m=+1294.945346127" Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.390739 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" podStartSLOduration=2.390721188 podStartE2EDuration="2.390721188s" podCreationTimestamp="2026-01-30 10:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:10.381732716 +0000 UTC m=+1294.948036540" watchObservedRunningTime="2026-01-30 10:33:10.390721188 +0000 UTC m=+1294.957025012" Jan 30 10:33:11 crc kubenswrapper[4984]: I0130 10:33:11.028933 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:11 crc kubenswrapper[4984]: I0130 10:33:11.037315 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.408836 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerStarted","Data":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.408895 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" 
gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.421041 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.421084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.422646 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerStarted","Data":"955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430220 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430343 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430471 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" containerID="cri-o://a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430586 4984 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" containerID="cri-o://9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.445485 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.39593471 podStartE2EDuration="7.445460965s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.249708686 +0000 UTC m=+1292.816012510" lastFinishedPulling="2026-01-30 10:33:13.299234931 +0000 UTC m=+1297.865538765" observedRunningTime="2026-01-30 10:33:14.428083656 +0000 UTC m=+1298.994387480" watchObservedRunningTime="2026-01-30 10:33:14.445460965 +0000 UTC m=+1299.011764789" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.449684 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.851120446 podStartE2EDuration="7.449664598s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.700688919 +0000 UTC m=+1293.266992743" lastFinishedPulling="2026-01-30 10:33:13.299233071 +0000 UTC m=+1297.865536895" observedRunningTime="2026-01-30 10:33:14.447178321 +0000 UTC m=+1299.013482145" watchObservedRunningTime="2026-01-30 10:33:14.449664598 +0000 UTC m=+1299.015968422" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.495042 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.529207184 podStartE2EDuration="7.494988751s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.360810532 +0000 UTC m=+1292.927114356" lastFinishedPulling="2026-01-30 10:33:13.326592099 +0000 UTC m=+1297.892895923" observedRunningTime="2026-01-30 10:33:14.467107679 +0000 UTC m=+1299.033411513" 
watchObservedRunningTime="2026-01-30 10:33:14.494988751 +0000 UTC m=+1299.061292585" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.513093 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.843283785 podStartE2EDuration="7.513075368s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.62768867 +0000 UTC m=+1293.193992504" lastFinishedPulling="2026-01-30 10:33:13.297480263 +0000 UTC m=+1297.863784087" observedRunningTime="2026-01-30 10:33:14.491814315 +0000 UTC m=+1299.058118129" watchObservedRunningTime="2026-01-30 10:33:14.513075368 +0000 UTC m=+1299.079379192" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.441836 4984 generic.go:334] "Generic (PLEG): container finished" podID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerID="9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" exitCode=0 Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.442122 4984 generic.go:334] "Generic (PLEG): container finished" podID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerID="a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" exitCode=143 Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.441920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179"} Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.442180 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b"} Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.567438 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622273 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622478 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622540 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.623213 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs" (OuterVolumeSpecName: "logs") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.640784 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm" (OuterVolumeSpecName: "kube-api-access-wjvmm") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "kube-api-access-wjvmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.648579 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.654486 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data" (OuterVolumeSpecName: "config-data") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.724978 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725023 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725039 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725054 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.218381 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a7be6e3_d6f3_4aef_b870_985a4e3a400f.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"e2e94b843b76f25cb381e78e8277fe2519e55455a14ac71ad0fc880044917b0f"} Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455678 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455692 4984 scope.go:117] "RemoveContainer" containerID="9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.491410 4984 scope.go:117] "RemoveContainer" containerID="a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.499462 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.518649 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527328 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.527770 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527789 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.527801 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527807 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527997 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.528012 4984 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.529093 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.531410 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.531434 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.536056 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638193 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638293 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638332 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638493 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.739940 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740003 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740049 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740093 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740201 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740678 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.749927 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.751127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.764579 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc 
kubenswrapper[4984]: I0130 10:33:16.764609 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.872359 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.168191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.466102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.466565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"4144ee9c2a28e71f131e6c10f223d1b110888ba0da851c6cf5c4df3303551826"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.469540 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerID="d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57" exitCode=0 Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.469580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerDied","Data":"d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.660043 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.733524 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.733585 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.009999 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.010382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.041011 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.088427 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.104447 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" path="/var/lib/kubelet/pods/6a7be6e3-d6f3-4aef-b870-985a4e3a400f/volumes" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.175274 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.176801 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns" containerID="cri-o://98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0" gracePeriod=10 Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.504421 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.506763 4984 generic.go:334] "Generic (PLEG): container finished" podID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerID="98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0" exitCode=0 Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.506844 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0"} Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.528416 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.528390482 podStartE2EDuration="2.528390482s" podCreationTimestamp="2026-01-30 10:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:18.527029725 +0000 UTC m=+1303.093333539" watchObservedRunningTime="2026-01-30 10:33:18.528390482 +0000 UTC m=+1303.094694306" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.566131 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.754398 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779742 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779938 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780012 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780050 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780075 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.793146 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz" (OuterVolumeSpecName: "kube-api-access-ttccz") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "kube-api-access-ttccz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.824387 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.824657 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.853459 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.868153 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config" (OuterVolumeSpecName: "config") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.870392 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.878640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.881951 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.881987 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882001 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882017 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882030 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.920023 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.984239 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.996542 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.085959 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086295 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086350 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086428 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.098042 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts" (OuterVolumeSpecName: "scripts") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.113163 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5" (OuterVolumeSpecName: "kube-api-access-bjqc5") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "kube-api-access-bjqc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.127966 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.136451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data" (OuterVolumeSpecName: "config-data") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.188971 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189533 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189665 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189683 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518467 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518460 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"944e20dd436d6475eadb44cdbbb965933e9f99f6729e1968115646dd8b334bc1"} Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518726 4984 scope.go:117] "RemoveContainer" containerID="98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.528835 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.529872 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerDied","Data":"e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27"} Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.529970 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.559295 4984 scope.go:117] "RemoveContainer" containerID="56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298" Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.571567 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.580619 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.601193 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.604737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" containerID="cri-o://5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" gracePeriod=30 Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.605474 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" containerID="cri-o://5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" gracePeriod=30 Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 
10:33:19.619708 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.641763 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.107603 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" path="/var/lib/kubelet/pods/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50/volumes" Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.539816 4984 generic.go:334] "Generic (PLEG): container finished" podID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" exitCode=143 Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540117 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log" containerID="cri-o://ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" gracePeriod=30 Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"} Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540440 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" containerID="cri-o://955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" gracePeriod=30 Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.541035 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" 
containerName="nova-metadata-metadata" containerID="cri-o://94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" gracePeriod=30 Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.162795 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.237825 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.237946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238031 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238076 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238208 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") 
pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.239116 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs" (OuterVolumeSpecName: "logs") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.244861 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7" (OuterVolumeSpecName: "kube-api-access-w5fg7") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "kube-api-access-w5fg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.265293 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.272561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data" (OuterVolumeSpecName: "config-data") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.290577 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340861 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340965 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.341043 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.341138 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.552618 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerID="955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" exitCode=0 Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.552702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerDied","Data":"955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b"} Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560230 4984 generic.go:334] "Generic (PLEG): container finished" podID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" exitCode=0 Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560288 4984 generic.go:334] "Generic (PLEG): container finished" podID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" exitCode=143 Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560310 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560340 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"4144ee9c2a28e71f131e6c10f223d1b110888ba0da851c6cf5c4df3303551826"} Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560468 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.590464 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.609572 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.628702 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644306 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644820 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644840 4984 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644849 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644876 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="init" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644885 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="init" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644917 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644927 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644946 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644955 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645168 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645190 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645212 4984 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645222 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.646338 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.652561 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.652770 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.660877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.670655 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.671018 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.671051 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} err="failed to get container status \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": rpc error: code = NotFound desc = could not find container 
\"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.671072 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.672620 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.672651 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} err="failed to get container status \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.672669 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.674327 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} err="failed to get container status \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": rpc error: code = NotFound desc = could not find 
container \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.674355 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.677427 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} err="failed to get container status \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750617 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750683 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750762 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750808 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750830 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.812410 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.852654 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853272 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853756 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854091 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854300 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854394 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858091 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858576 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " 
pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858754 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb" (OuterVolumeSpecName: "kube-api-access-b9cvb") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). InnerVolumeSpecName "kube-api-access-b9cvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.859508 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.870204 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.882436 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.892371 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data" (OuterVolumeSpecName: "config-data") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956889 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956962 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956978 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.974322 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.143833 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" path="/var/lib/kubelet/pods/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0/volumes" Jan 30 10:33:22 crc kubenswrapper[4984]: W0130 10:33:22.473643 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40bafb7_7a35_49bc_aaed_9249967a6da1.slice/crio-390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72 WatchSource:0}: Error finding container 390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72: Status 404 returned error can't find the container with id 390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72 Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.473736 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.573333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72"} Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.581831 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerDied","Data":"075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d"} Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.582202 4984 scope.go:117] "RemoveContainer" containerID="955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.582086 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.616363 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.636390 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.644819 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: E0130 10:33:22.645162 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.645178 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.645393 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.646115 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.650256 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.654051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774440 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774493 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876505 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876569 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876670 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.880670 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.882440 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.894985 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.968287 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.440112 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:23 crc kubenswrapper[4984]: W0130 10:33:23.445221 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f8d034_e2e3_4db8_85b8_00459162d5ef.slice/crio-20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3 WatchSource:0}: Error finding container 20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3: Status 404 returned error can't find the container with id 20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3 Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.595219 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe"} Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.595533 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75"} Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.597449 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerStarted","Data":"20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.103121 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" path="/var/lib/kubelet/pods/cbcb3e98-2063-421d-a76f-bca749fa2824/volumes" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.448862 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.465307 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.465227447 podStartE2EDuration="3.465227447s" podCreationTimestamp="2026-01-30 10:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:23.616776084 +0000 UTC m=+1308.183079918" watchObservedRunningTime="2026-01-30 10:33:24.465227447 +0000 UTC m=+1309.031531271" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.504910 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505009 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505040 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505156 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" 
(UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.506102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs" (OuterVolumeSpecName: "logs") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.510206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh" (OuterVolumeSpecName: "kube-api-access-wlhnh") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "kube-api-access-wlhnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.537523 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data" (OuterVolumeSpecName: "config-data") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.542820 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606729 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606766 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606775 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608802 4984 generic.go:334] "Generic (PLEG): container finished" podID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" exitCode=0 Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608987 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"ecd4903f8d6e5a12e35abf7e02e0342af660b1313797fb073846a8e0fffb44cd"} Jan 30 10:33:24 crc kubenswrapper[4984]: 
I0130 10:33:24.609002 4984 scope.go:117] "RemoveContainer" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.609098 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.621861 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerStarted","Data":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.644239 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.644223645 podStartE2EDuration="2.644223645s" podCreationTimestamp="2026-01-30 10:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:24.640638878 +0000 UTC m=+1309.206942702" watchObservedRunningTime="2026-01-30 10:33:24.644223645 +0000 UTC m=+1309.210527469" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.665561 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.669886 4984 scope.go:117] "RemoveContainer" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.675013 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696225 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.696847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 
10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696870 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.696890 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696898 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.697090 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.697110 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.701178 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.704225 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.718426 4984 scope.go:117] "RemoveContainer" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.719459 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": container with ID starting with 5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978 not found: ID does not exist" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719502 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} err="failed to get container status \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": rpc error: code = NotFound desc = could not find container \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": container with ID starting with 5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978 not found: ID does not exist" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719531 4984 scope.go:117] "RemoveContainer" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.719785 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": container with ID starting with 5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a not found: 
ID does not exist" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719807 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"} err="failed to get container status \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": rpc error: code = NotFound desc = could not find container \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": container with ID starting with 5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a not found: ID does not exist" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.733157 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.809668 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.810016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.810125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 
10:33:24.810209 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912262 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912354 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912392 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912441 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.913972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: 
\"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.921373 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.921552 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.944687 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.029562 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.345463 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:25 crc kubenswrapper[4984]: W0130 10:33:25.347932 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod808d797f_903f_4730_a470_4f78f53409ae.slice/crio-e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36 WatchSource:0}: Error finding container e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36: Status 404 returned error can't find the container with id e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36 Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.631384 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.631442 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36"} Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.105141 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" path="/var/lib/kubelet/pods/53602417-9f58-4125-ae4e-50a4acbd15c6/volumes" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.642919 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.667043 4984 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.667025261 podStartE2EDuration="2.667025261s" podCreationTimestamp="2026-01-30 10:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:26.665281583 +0000 UTC m=+1311.231585407" watchObservedRunningTime="2026-01-30 10:33:26.667025261 +0000 UTC m=+1311.233329085" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.974949 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.975345 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:33:27 crc kubenswrapper[4984]: I0130 10:33:27.969209 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 10:33:30 crc kubenswrapper[4984]: I0130 10:33:30.567846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 10:33:31 crc kubenswrapper[4984]: I0130 10:33:31.975587 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:33:31 crc kubenswrapper[4984]: I0130 10:33:31.975988 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:33:32 crc kubenswrapper[4984]: I0130 10:33:32.968886 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.002140 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058495 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058572 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058781 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058812 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.741315 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.001203 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.001458 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" containerID="cri-o://5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" gracePeriod=30 Jan 30 
10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.030898 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.030947 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.554194 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.733917 4984 generic.go:334] "Generic (PLEG): container finished" podID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" exitCode=2 Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.733982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerDied","Data":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerDied","Data":"f05fd5917bae61700291c3765574cc3a3b08139624adb6fb3ccd5f7058c55fa6"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734023 4984 scope.go:117] "RemoveContainer" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734130 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.736517 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"2d180dfe-bc61-4961-b672-20c6ff8c2911\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.741903 4984 generic.go:334] "Generic (PLEG): container finished" podID="6148a148-07c4-4584-95ff-10d5e5147954" containerID="01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df" exitCode=0 Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.741952 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerDied","Data":"01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.749641 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h" (OuterVolumeSpecName: "kube-api-access-psg4h") pod "2d180dfe-bc61-4961-b672-20c6ff8c2911" (UID: "2d180dfe-bc61-4961-b672-20c6ff8c2911"). InnerVolumeSpecName "kube-api-access-psg4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.817768 4984 scope.go:117] "RemoveContainer" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: E0130 10:33:35.818845 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": container with ID starting with 5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822 not found: ID does not exist" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.818888 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} err="failed to get container status \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": rpc error: code = NotFound desc = could not find container \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": container with ID starting with 5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822 not found: ID does not exist" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.838828 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.077177 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.089301 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.112581 4984 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" path="/var/lib/kubelet/pods/2d180dfe-bc61-4961-b672-20c6ff8c2911/volumes" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113302 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113433 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:36 crc kubenswrapper[4984]: E0130 10:33:36.113650 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113668 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113715 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113861 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.114661 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.117693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.117890 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.123303 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250300 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352313 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352395 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352450 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352576 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373650 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373732 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.377711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.443367 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.946772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.043869 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057289 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057614 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" containerID="cri-o://33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057776 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" containerID="cri-o://d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057833 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" containerID="cri-o://26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057875 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" containerID="cri-o://6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.083758 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc 
kubenswrapper[4984]: I0130 10:33:37.084059 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.084236 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.084306 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.094706 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg" (OuterVolumeSpecName: "kube-api-access-68tjg") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "kube-api-access-68tjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.100416 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts" (OuterVolumeSpecName: "scripts") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.124012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.137593 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data" (OuterVolumeSpecName: "config-data") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190226 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190557 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190573 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190585 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 
10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.764682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa","Type":"ContainerStarted","Data":"d27c80cb732878754ae4da63499033ddf9f11a0242673f07d88fc64594e78890"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.764755 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa","Type":"ContainerStarted","Data":"735c4f2833a6a7a155fb0b86bc128f1b0093f6574118e6330b7e8b132f11d425"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.765921 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767277 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerDied","Data":"7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767301 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767338 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773215 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179" exitCode=0 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773269 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f" exitCode=2 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773282 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce" exitCode=0 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773309 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773337 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.789484 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.415074564 podStartE2EDuration="1.789465002s" 
podCreationTimestamp="2026-01-30 10:33:36 +0000 UTC" firstStartedPulling="2026-01-30 10:33:36.964670038 +0000 UTC m=+1321.530973862" lastFinishedPulling="2026-01-30 10:33:37.339060476 +0000 UTC m=+1321.905364300" observedRunningTime="2026-01-30 10:33:37.784047697 +0000 UTC m=+1322.350351521" watchObservedRunningTime="2026-01-30 10:33:37.789465002 +0000 UTC m=+1322.355768826" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.849691 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: E0130 10:33:37.850112 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.850145 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.850393 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.851114 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.856730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.870537 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938180 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.040496 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc 
kubenswrapper[4984]: I0130 10:33:38.040683 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.040719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.049003 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.060476 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.080890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.166193 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: W0130 10:33:38.670749 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b097926_177e_428a_a271_ede45f90f7d6.slice/crio-bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0 WatchSource:0}: Error finding container bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0: Status 404 returned error can't find the container with id bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0 Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.682961 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.790650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b097926-177e-428a-a271-ede45f90f7d6","Type":"ContainerStarted","Data":"bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0"} Jan 30 10:33:39 crc kubenswrapper[4984]: I0130 10:33:39.802447 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b097926-177e-428a-a271-ede45f90f7d6","Type":"ContainerStarted","Data":"2305d9ce55d27314f10eb520e705ef9a5bf155953791d41477a95951fc2306ef"} Jan 30 10:33:39 crc kubenswrapper[4984]: I0130 10:33:39.827455 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.827437787 podStartE2EDuration="2.827437787s" podCreationTimestamp="2026-01-30 10:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:39.817502279 +0000 UTC m=+1324.383806123" watchObservedRunningTime="2026-01-30 10:33:39.827437787 +0000 UTC m=+1324.393741611" Jan 30 10:33:40 crc kubenswrapper[4984]: I0130 
10:33:40.811113 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.823185 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb" exitCode=0 Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824476 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb"} Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4"} Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824517 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.848958 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.982173 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.984398 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.994375 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009816 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009869 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc 
kubenswrapper[4984]: I0130 10:33:42.009893 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010593 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010787 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.011442 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.011463 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.015446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm" (OuterVolumeSpecName: "kube-api-access-kg4jm") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "kube-api-access-kg4jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.028366 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts" (OuterVolumeSpecName: "scripts") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.044288 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114415 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114636 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114721 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.186428 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.193456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data" (OuterVolumeSpecName: "config-data") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.222906 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.222946 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.832353 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.843280 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.890181 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.897916 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.932464 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933052 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933072 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933101 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933109 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933133 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933141 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933158 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933165 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933428 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933454 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933478 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933490 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.937819 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.940776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.941076 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.941190 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.952155 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141497 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141610 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141724 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141882 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.195130 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244269 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244333 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244696 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244805 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " 
pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244834 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.246202 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.246667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.250268 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.251933 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.252043 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.252710 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.253154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.266209 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.291091 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.838841 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.103769 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" path="/var/lib/kubelet/pods/5c10d6ea-d3d3-49cf-8185-0b4946edc4be/volumes" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.760358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.849858 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.849903 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"53c0f3d53985bd14e3af60b8e4782ac2a173cbd158ba349d861221960b04dc9d"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851299 4984 generic.go:334] "Generic (PLEG): container finished" podID="d02652b8-5031-4209-b2e7-228742c7a308" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" exitCode=137 Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851379 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerDied","Data":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851393 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851411 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerDied","Data":"af5529722b6cfea0d21c483516240a6c61e08bb8fa1bfc0ece4e5fb90209726f"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851429 4984 scope.go:117] "RemoveContainer" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.899971 4984 scope.go:117] "RemoveContainer" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: E0130 10:33:44.900455 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": container with ID starting with 3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940 not found: ID does not exist" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.900503 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} err="failed to get container status \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": rpc error: code = NotFound desc = could not find container \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": container with ID starting with 3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940 not found: ID does not exist" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916159 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klqv\" (UniqueName: 
\"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916639 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.921449 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv" (OuterVolumeSpecName: "kube-api-access-4klqv") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "kube-api-access-4klqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.940755 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data" (OuterVolumeSpecName: "config-data") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.946656 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018500 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018531 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018543 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.034517 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.035022 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.039409 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.040648 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:33:45 crc 
kubenswrapper[4984]: I0130 10:33:45.295627 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.312291 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.324771 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: E0130 10:33:45.325441 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.325469 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.325689 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.326482 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.337513 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.337900 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.339013 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.353404 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.430887 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431021 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431075 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: 
I0130 10:33:45.431108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532801 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532880 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 
10:33:45.532954 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.533022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.539557 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.548005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.652113 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.965211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.965557 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.030574 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.197789 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02652b8-5031-4209-b2e7-228742c7a308" path="/var/lib/kubelet/pods/d02652b8-5031-4209-b2e7-228742c7a308/volumes" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.232206 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.234222 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.242307 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:46 crc kubenswrapper[4984]: W0130 10:33:46.328746 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3933f23e_210c_483f_82ec_eb0cdbc09f4c.slice/crio-f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54 WatchSource:0}: Error finding container f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54: Status 404 returned error can't find the container with id f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54 Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.331343 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369269 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369378 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369449 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369471 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369567 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.470550 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 
10:33:46.471680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471764 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471800 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471821 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.472642 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.473797 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.473955 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.474230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.474415 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.491355 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jld6b\" (UniqueName: 
\"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.567669 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.980975 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3933f23e-210c-483f-82ec-eb0cdbc09f4c","Type":"ContainerStarted","Data":"213bdfe85637ff917d49ef7851de52eda84aa268b35f0e0bf7c6811943ca822f"} Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.981387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3933f23e-210c-483f-82ec-eb0cdbc09f4c","Type":"ContainerStarted","Data":"f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54"} Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.994531 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} Jan 30 10:33:47 crc kubenswrapper[4984]: I0130 10:33:47.014233 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.014217504 podStartE2EDuration="2.014217504s" podCreationTimestamp="2026-01-30 10:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:47.004866842 +0000 UTC m=+1331.571170666" watchObservedRunningTime="2026-01-30 10:33:47.014217504 +0000 UTC m=+1331.580521328" Jan 30 10:33:47 crc kubenswrapper[4984]: I0130 10:33:47.053333 4984 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.011582 4984 generic.go:334] "Generic (PLEG): container finished" podID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerID="6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12" exitCode=0 Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.011650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12"} Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.012208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerStarted","Data":"3a743fd4af77fa8320a0aa82fc1ee65e702a095968f1a2be7dbc346d0b4f3fe2"} Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.071280 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.023101 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerStarted","Data":"e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1"} Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.023526 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.035907 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036019 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036167 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" containerID="cri-o://af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" gracePeriod=30 Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036189 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" containerID="cri-o://a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" gracePeriod=30 Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.063975 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" podStartSLOduration=3.063956565 podStartE2EDuration="3.063956565s" podCreationTimestamp="2026-01-30 10:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:49.056741781 +0000 UTC m=+1333.623045605" watchObservedRunningTime="2026-01-30 10:33:49.063956565 +0000 UTC m=+1333.630260389" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.085987 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.069671329 podStartE2EDuration="7.085970309s" podCreationTimestamp="2026-01-30 10:33:42 +0000 UTC" firstStartedPulling="2026-01-30 10:33:43.844279171 +0000 UTC m=+1328.410582995" lastFinishedPulling="2026-01-30 10:33:47.860578131 +0000 UTC m=+1332.426881975" observedRunningTime="2026-01-30 10:33:49.084040977 +0000 UTC m=+1333.650344801" watchObservedRunningTime="2026-01-30 10:33:49.085970309 +0000 UTC m=+1333.652274133" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.157082 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.046062 4984 generic.go:334] "Generic (PLEG): container finished" podID="808d797f-903f-4730-a470-4f78f53409ae" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" exitCode=143 Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.046150 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.652738 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055361 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" containerID="cri-o://7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055431 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" containerID="cri-o://4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055470 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" containerID="cri-o://ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055501 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" containerID="cri-o://b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" gracePeriod=30 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.078965 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" exitCode=0 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079299 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" exitCode=2 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079312 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" exitCode=0 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079364 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.725303 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.806854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.806967 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807109 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807203 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807557 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs" (OuterVolumeSpecName: "logs") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.808164 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.824484 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r" (OuterVolumeSpecName: "kube-api-access-x762r") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "kube-api-access-x762r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.858363 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910428 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data" (OuterVolumeSpecName: "config-data") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910741 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910783 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910795 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092086 4984 generic.go:334] "Generic (PLEG): container finished" podID="808d797f-903f-4730-a470-4f78f53409ae" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" exitCode=0 Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092162 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092925 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.097459 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36"} Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.097502 4984 scope.go:117] "RemoveContainer" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.129650 4984 scope.go:117] "RemoveContainer" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.151462 4984 scope.go:117] "RemoveContainer" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.152031 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": container with ID starting with a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791 not found: ID does not exist" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152123 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} err="failed to get container status \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": rpc error: code = 
NotFound desc = could not find container \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": container with ID starting with a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791 not found: ID does not exist" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152192 4984 scope.go:117] "RemoveContainer" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.152694 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": container with ID starting with af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4 not found: ID does not exist" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152728 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} err="failed to get container status \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": rpc error: code = NotFound desc = could not find container \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": container with ID starting with af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4 not found: ID does not exist" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.162334 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.170993 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.179422 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.181559 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181589 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.181615 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181624 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181900 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181935 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.183167 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190831 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190852 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.191024 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325872 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325937 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.326160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.326230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437804 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437933 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437957 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" 
Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.440980 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.445446 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.454560 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.458133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.459647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.462842 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.510701 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.633287 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744409 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744490 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744601 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744688 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744837 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744866 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.746161 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.746569 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.749508 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw" (OuterVolumeSpecName: "kube-api-access-8s9mw") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "kube-api-access-8s9mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.750030 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts" (OuterVolumeSpecName: "scripts") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.784178 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.811277 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847043 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847079 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847095 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847107 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847120 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847132 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.851030 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data" (OuterVolumeSpecName: "config-data") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.853515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.949463 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.949519 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.008232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112649 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808d797f-903f-4730-a470-4f78f53409ae" path="/var/lib/kubelet/pods/808d797f-903f-4730-a470-4f78f53409ae/volumes" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112852 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" exitCode=0 Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112970 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.114864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115237 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"53c0f3d53985bd14e3af60b8e4782ac2a173cbd158ba349d861221960b04dc9d"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115694 4984 scope.go:117] "RemoveContainer" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.171409 4984 scope.go:117] "RemoveContainer" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.176512 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.193442 4984 scope.go:117] "RemoveContainer" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.193607 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205104 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205737 4984 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205767 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205805 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205817 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205857 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205870 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205899 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205910 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206133 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206186 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc 
kubenswrapper[4984]: I0130 10:33:54.206207 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206228 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.208489 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.210868 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.211550 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.212449 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.224330 4984 scope.go:117] "RemoveContainer" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.235340 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.251677 4984 scope.go:117] "RemoveContainer" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.252163 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": container with ID starting with ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1 not found: ID does not exist" 
containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252217 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} err="failed to get container status \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": rpc error: code = NotFound desc = could not find container \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": container with ID starting with ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252238 4984 scope.go:117] "RemoveContainer" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.252703 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": container with ID starting with 4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291 not found: ID does not exist" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252744 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} err="failed to get container status \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": rpc error: code = NotFound desc = could not find container \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": container with ID starting with 4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252773 4984 scope.go:117] 
"RemoveContainer" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.253135 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": container with ID starting with b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc not found: ID does not exist" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253162 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} err="failed to get container status \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": rpc error: code = NotFound desc = could not find container \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": container with ID starting with b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253182 4984 scope.go:117] "RemoveContainer" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.253522 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": container with ID starting with 7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068 not found: ID does not exist" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253543 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} err="failed to get container status \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": rpc error: code = NotFound desc = could not find container \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": container with ID starting with 7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359139 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359181 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359220 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: 
I0130 10:33:54.359806 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359950 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.360030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.461854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462036 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462099 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462235 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462405 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462458 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 
10:33:54.462531 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.463235 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.466575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.467400 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.475163 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.476041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.476763 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.479309 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.482475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.531086 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.037610 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:55 crc kubenswrapper[4984]: W0130 10:33:55.042440 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa8fceae_cb31_48dd_8104_9a905f788af6.slice/crio-e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933 WatchSource:0}: Error finding container e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933: Status 404 returned error can't find the container with id e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933 Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.121917 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.121959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.126173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.139999 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.139982006 podStartE2EDuration="2.139982006s" podCreationTimestamp="2026-01-30 10:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:55.139558975 +0000 UTC m=+1339.705862799" watchObservedRunningTime="2026-01-30 10:33:55.139982006 +0000 UTC m=+1339.706285830" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.652906 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.670661 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.101553 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" path="/var/lib/kubelet/pods/8845436d-e0d5-400d-bb54-18e9ffcb036f/volumes" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.135922 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"d5bd4b62675a63131fb10128259c68f2b8a481a886085edc14e1177bb2781fd6"} Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.150242 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.343948 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.345337 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.350531 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.351030 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.364509 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511156 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511299 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511418 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.569403 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620571 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620625 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620668 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620905 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: 
I0130 10:33:56.629935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.631211 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.642869 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.643074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.644415 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.644640 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns" containerID="cri-o://dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" 
gracePeriod=10 Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.758879 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.182578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"4d33293ab1474476948154b916031a2bd87df947166d1d0df66c9f2a4ce3c9f7"} Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.189512 4984 generic.go:334] "Generic (PLEG): container finished" podID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerID="dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" exitCode=0 Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.189595 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f"} Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.286480 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461621 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461791 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461896 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461975 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.462031 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.462078 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.486487 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72" (OuterVolumeSpecName: "kube-api-access-78r72") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "kube-api-access-78r72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.519190 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:57 crc kubenswrapper[4984]: W0130 10:33:57.519957 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda005f64f_9ec0_4a4a_b64e_9ae00924dce7.slice/crio-b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9 WatchSource:0}: Error finding container b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9: Status 404 returned error can't find the container with id b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9 Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.534967 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.536789 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config" (OuterVolumeSpecName: "config") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.559034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564183 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564441 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564469 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564481 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564490 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564499 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.597551 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.666210 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.203992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"03ef2dbacbf8edaa5e2363012639f1cbc9bed34f5839518cd1ff06c6fe10ae8a"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205735 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"d321da41062e4b6042ed3a9bb6a7b9877923a06f6b266f1b243b188fd84ea8bc"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205796 4984 scope.go:117] "RemoveContainer" containerID="dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205955 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.211424 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerStarted","Data":"899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.211480 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerStarted","Data":"b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.241273 4984 scope.go:117] "RemoveContainer" containerID="aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.244709 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.259022 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.262775 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-489nm" podStartSLOduration=2.262760007 podStartE2EDuration="2.262760007s" podCreationTimestamp="2026-01-30 10:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:58.244726501 +0000 UTC m=+1342.811030345" watchObservedRunningTime="2026-01-30 10:33:58.262760007 +0000 UTC m=+1342.829063831" Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.224839 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"2c2e2bb79ec3040a4ee01008f1909f5acb8f2c030b8c3dbd1d7e91118279ad0e"} Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.225791 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.260557 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.432503524 podStartE2EDuration="5.260532996s" podCreationTimestamp="2026-01-30 10:33:54 +0000 UTC" firstStartedPulling="2026-01-30 10:33:55.046494585 +0000 UTC m=+1339.612798409" lastFinishedPulling="2026-01-30 10:33:58.874524047 +0000 UTC m=+1343.440827881" observedRunningTime="2026-01-30 10:33:59.249976223 +0000 UTC m=+1343.816280047" watchObservedRunningTime="2026-01-30 10:33:59.260532996 +0000 UTC m=+1343.826836820" Jan 30 10:34:00 crc kubenswrapper[4984]: I0130 10:34:00.105007 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" path="/var/lib/kubelet/pods/2bea2708-4bb8-48d3-ba2a-0b28a921c053/volumes" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.000355 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.000969 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.295024 4984 
generic.go:334] "Generic (PLEG): container finished" podID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerID="899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3" exitCode=0 Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.295106 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerDied","Data":"899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3"} Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.511439 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.511852 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.525502 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.525516 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.698677 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796160 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796400 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.804913 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb" (OuterVolumeSpecName: "kube-api-access-tmsbb") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "kube-api-access-tmsbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.806776 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts" (OuterVolumeSpecName: "scripts") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.831373 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data" (OuterVolumeSpecName: "config-data") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.851867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899230 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899283 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899293 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899301 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316506 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerDied","Data":"b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9"} Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316568 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9" Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316598 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517197 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517487 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" containerID="cri-o://6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f" gracePeriod=30 Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517578 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" containerID="cri-o://f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978" gracePeriod=30 Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.550868 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.551180 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler" containerID="cri-o://4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" gracePeriod=30 Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599022 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599289 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" containerID="cri-o://07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75" gracePeriod=30 Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599340 4984 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" containerID="cri-o://c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe" gracePeriod=30 Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.329443 4984 generic.go:334] "Generic (PLEG): container finished" podID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerID="07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75" exitCode=143 Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.329532 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75"} Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.332448 4984 generic.go:334] "Generic (PLEG): container finished" podID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerID="6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f" exitCode=143 Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.332495 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f"} Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.131748 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247682 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247740 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.253698 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9" (OuterVolumeSpecName: "kube-api-access-xkfw9") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "kube-api-access-xkfw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.281131 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.283536 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data" (OuterVolumeSpecName: "config-data") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342589 4984 generic.go:334] "Generic (PLEG): container finished" podID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" exitCode=0 Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342636 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerDied","Data":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"} Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342663 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerDied","Data":"20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3"} Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342681 4984 scope.go:117] "RemoveContainer" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342812 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349847 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349892 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349906 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.369304 4984 scope.go:117] "RemoveContainer" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.369736 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": container with ID starting with 4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee not found: ID does not exist" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.369775 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"} err="failed to get container status \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": rpc error: code = NotFound desc = could not find container \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": container with ID 
starting with 4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee not found: ID does not exist" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.379495 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.392753 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.404579 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405011 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405055 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns" Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405104 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405113 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler" Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405126 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="init" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405133 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="init" Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405146 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 
10:34:07.405154 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405396 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405434 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405445 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.406057 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.407806 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.421279 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452734 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " 
pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.553980 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.554028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.554178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.559410 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.569527 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.576619 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0" Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.726751 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.100142 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" path="/var/lib/kubelet/pods/50f8d034-e2e3-4db8-85b8-00459162d5ef/volumes" Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.205633 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.355319 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d","Type":"ContainerStarted","Data":"7d749bda7e7a7bca71d05d7f9b3f8db75416cdbea8d46fb7f8b3052ddd84f2a5"} Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.039982 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:52814->10.217.0.195:8775: read: connection reset by peer" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.039978 4984 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:52826->10.217.0.195:8775: read: connection reset by peer" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.365352 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d","Type":"ContainerStarted","Data":"5d25881e88c2b6ad6ee30ea1a57f43e5ca686440ff039679a2280372e21a5f14"} Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.368043 4984 generic.go:334] "Generic (PLEG): container finished" podID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerID="c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe" exitCode=0 Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.368082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe"} Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.385688 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.385631412 podStartE2EDuration="2.385631412s" podCreationTimestamp="2026-01-30 10:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:09.381593973 +0000 UTC m=+1353.947897817" watchObservedRunningTime="2026-01-30 10:34:09.385631412 +0000 UTC m=+1353.951935236" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.486597 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.592115 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593055 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593293 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593352 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593381 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.594281 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs" (OuterVolumeSpecName: "logs") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.627373 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt" (OuterVolumeSpecName: "kube-api-access-t2xzt") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "kube-api-access-t2xzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.669421 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.679418 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data" (OuterVolumeSpecName: "config-data") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.688412 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696437 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696476 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696487 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696499 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696508 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.384469 4984 generic.go:334] "Generic (PLEG): container finished" podID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerID="f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978" exitCode=0 Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.384537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978"} Jan 
30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.385116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53"} Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.385135 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387462 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72"} Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387475 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387518 4984 scope.go:117] "RemoveContainer" containerID="c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.392438 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.418675 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.436537 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.442555 4984 scope.go:117] "RemoveContainer" containerID="07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460125 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460520 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460539 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460557 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460572 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460587 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460595 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460612 4984 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460618 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460775 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460787 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460807 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460819 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.461786 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.465629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.468919 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.495126 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519234 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519346 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519383 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 
10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519413 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519530 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520172 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs" (OuterVolumeSpecName: "logs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520273 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520627 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc 
kubenswrapper[4984]: I0130 10:34:10.520697 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.538983 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5" (OuterVolumeSpecName: "kube-api-access-bwsv5") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "kube-api-access-bwsv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.547468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data" (OuterVolumeSpecName: "config-data") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.548321 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.567997 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.592402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622651 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622746 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622783 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc 
kubenswrapper[4984]: I0130 10:34:10.622845 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622889 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622902 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622912 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622922 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622930 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.623568 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " 
pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.626510 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.630566 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.632372 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.639226 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.788861 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.234818 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: W0130 10:34:11.237923 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0538ab81_6e35_473d_860f_7f680671646d.slice/crio-5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718 WatchSource:0}: Error finding container 5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718: Status 404 returned error can't find the container with id 5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718 Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.398650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718"} Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.398692 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.437407 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.458645 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.478194 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.480147 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.499880 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.500055 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.500265 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.533344 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.542967 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543139 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.544038 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647148 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647171 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " 
pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647217 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647411 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.648571 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652091 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.653120 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.662496 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.904850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.105708 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" path="/var/lib/kubelet/pods/a40bafb7-7a35-49bc-aaed-9249967a6da1/volumes" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.106802 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" path="/var/lib/kubelet/pods/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485/volumes" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.408426 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"a8335fd2f11b9193688e84eb0431848813337f8f3c6d75ec631934bf7546301d"} Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.408475 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"4f38fabe5cf4a86cbd340031aaf5f118100f8b6a5a9fa7a12e6ed406075899d3"} Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.433171 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433148023 podStartE2EDuration="2.433148023s" podCreationTimestamp="2026-01-30 10:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:12.424717226 +0000 UTC m=+1356.991021070" watchObservedRunningTime="2026-01-30 10:34:12.433148023 +0000 UTC m=+1356.999451847" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.445628 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:12 crc kubenswrapper[4984]: W0130 10:34:12.447151 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a8b1830_c479_4612_a461_7cb46d2c949f.slice/crio-e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515 WatchSource:0}: Error finding container e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515: Status 404 returned error can't find the container with id e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515 Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.727278 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"de62eede36878ed0878f6230a27dfd3a94597ff829a71f7dff2b1d8b31b44d12"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"290ee958368f349ab02fd730409b4470f27ac6db778ce926403d11407e9a0846"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424501 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.456535 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.456507873 podStartE2EDuration="2.456507873s" podCreationTimestamp="2026-01-30 10:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:13.45193393 +0000 UTC m=+1358.018237754" watchObservedRunningTime="2026-01-30 10:34:13.456507873 +0000 
UTC m=+1358.022811737" Jan 30 10:34:15 crc kubenswrapper[4984]: I0130 10:34:15.789143 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:34:15 crc kubenswrapper[4984]: I0130 10:34:15.789915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:34:17 crc kubenswrapper[4984]: I0130 10:34:17.727496 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 10:34:17 crc kubenswrapper[4984]: I0130 10:34:17.751508 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 10:34:18 crc kubenswrapper[4984]: I0130 10:34:18.511289 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 10:34:20 crc kubenswrapper[4984]: I0130 10:34:20.789562 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:34:20 crc kubenswrapper[4984]: I0130 10:34:20.789913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.803528 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0538ab81-6e35-473d-860f-7f680671646d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.803543 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0538ab81-6e35-473d-860f-7f680671646d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:21 crc kubenswrapper[4984]: 
I0130 10:34:21.906476 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.906791 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:22 crc kubenswrapper[4984]: I0130 10:34:22.922448 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a8b1830-c479-4612-a461-7cb46d2c949f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:22 crc kubenswrapper[4984]: I0130 10:34:22.922445 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a8b1830-c479-4612-a461-7cb46d2c949f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:24 crc kubenswrapper[4984]: I0130 10:34:24.546340 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.908435 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.908981 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.923355 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.923453 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.911532 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.911691 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.912396 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.912429 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.920702 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.923794 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001113 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001407 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001446 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.002117 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.002167 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" gracePeriod=600 Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623502 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" exitCode=0 Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623591 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"} Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623889 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"} Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623912 4984 scope.go:117] "RemoveContainer" containerID="337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796" Jan 30 10:34:39 crc kubenswrapper[4984]: I0130 10:34:39.529799 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:40 crc kubenswrapper[4984]: I0130 10:34:40.371705 4984 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:44 crc kubenswrapper[4984]: I0130 10:34:44.160176 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq" containerID="cri-o://9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" gracePeriod=604796 Jan 30 10:34:44 crc kubenswrapper[4984]: I0130 10:34:44.485547 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq" containerID="cri-o://53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" gracePeriod=604796 Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.242901 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.246976 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.255575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.335875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.335962 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.336030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438134 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438201 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438837 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.471071 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.574658 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.064709 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.777769 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292" exitCode=0 Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.777884 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"} Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.778171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"af28661a5979a3c9094f4b828557ffcee62d82e99faaa0eff4776f3226c108b1"} Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.753623 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787873 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787951 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787972 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787986 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788019 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788105 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: 
\"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788153 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788194 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788243 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788274 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788338 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " Jan 30 10:34:50 crc kubenswrapper[4984]: 
I0130 10:34:50.789404 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.799904 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.801539 4984 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.801563 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.808617 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.829993 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.836784 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"} Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.845782 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.848759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.850689 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v" (OuterVolumeSpecName: "kube-api-access-mzh7v") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "kube-api-access-mzh7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.854766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info" (OuterVolumeSpecName: "pod-info") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.865221 4984 generic.go:334] "Generic (PLEG): container finished" podID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerID="53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" exitCode=0 Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.865312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a"} Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891841 4984 generic.go:334] "Generic (PLEG): container finished" podID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" exitCode=0 Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891894 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"} Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282"} Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891946 4984 scope.go:117] "RemoveContainer" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.892120 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.907532 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.907561 4984 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909119 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909145 4984 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909159 4984 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909171 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.936789 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.939571 4984 scope.go:117] "RemoveContainer" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.950771 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data" (OuterVolumeSpecName: "config-data") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.966524 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf" (OuterVolumeSpecName: "server-conf") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991372 4984 scope.go:117] "RemoveContainer" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" Jan 30 10:34:50 crc kubenswrapper[4984]: E0130 10:34:50.991890 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": container with ID starting with 9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83 not found: ID does not exist" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991921 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"} err="failed to get container status \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": rpc error: code = NotFound desc = could not find container \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": container with ID starting with 9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83 not found: ID does not exist" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991941 4984 scope.go:117] "RemoveContainer" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48" Jan 30 10:34:50 crc kubenswrapper[4984]: E0130 10:34:50.992270 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": container with ID starting with f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48 not found: ID does not exist" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48" Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.992287 
4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} err="failed to get container status \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": rpc error: code = NotFound desc = could not find container \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": container with ID starting with f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48 not found: ID does not exist" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010609 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010648 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010659 4984 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.015778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.112746 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.149211 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214529 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214647 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214843 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214942 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214982 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215016 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215041 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215074 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215104 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" 
(UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215134 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.217658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.218037 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.219752 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.220112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.223827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info" (OuterVolumeSpecName: "pod-info") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.225488 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.254148 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.259470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj" (OuterVolumeSpecName: "kube-api-access-zl7rj") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "kube-api-access-zl7rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.270155 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.279203 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.308401 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf" (OuterVolumeSpecName: "server-conf") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320670 4984 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320708 4984 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320723 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320734 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320761 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320773 4984 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320785 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 
10:34:51.320796 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320810 4984 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.328764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data" (OuterVolumeSpecName: "config-data") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356846 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="setup-container" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356872 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="setup-container" Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356893 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356902 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356923 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="setup-container" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356931 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="setup-container" Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356950 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356957 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.357187 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.357214 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.358469 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367637 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367790 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367909 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.373681 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.381655 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.381924 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4bdkz" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.382203 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.383725 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.387658 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.422598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc 
kubenswrapper[4984]: I0130 10:34:51.422941 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.422987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423063 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423107 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423146 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423216 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423248 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423364 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423422 4984 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423437 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.489486 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531284 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531322 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531340 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531360 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531452 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531497 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531585 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.532654 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.533117 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.533799 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.534574 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.535044 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.543228 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.546751 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.554172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.556911 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.566122 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.575139 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.583240 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.642217 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.905054 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355" exitCode=0 Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.905283 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"} Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907520 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83"} Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907673 4984 scope.go:117] "RemoveContainer" containerID="53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.932759 4984 scope.go:117] "RemoveContainer" containerID="627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.956802 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.967041 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.982798 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: 
I0130 10:34:51.984776 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.986678 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.986909 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.987087 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dmx9d" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.987349 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988170 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.002965 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.041937 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042007 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042098 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042167 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042220 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042238 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042280 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.100348 4984 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" path="/var/lib/kubelet/pods/0e0c1fc2-7876-468d-86b8-7348a8418ee9/volumes" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.101084 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" path="/var/lib/kubelet/pods/6d00f70a-4071-4375-81f3-45e7aab83cd3/volumes" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.112357 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:52 crc kubenswrapper[4984]: W0130 10:34:52.117985 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137801a7_4625_4c4c_a855_8ecdf65e509a.slice/crio-042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896 WatchSource:0}: Error finding container 042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896: Status 404 returned error can't find the container with id 042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896 Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.159080 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163357 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163454 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163756 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164593 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.165193 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.165229 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.166923 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.167596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.169093 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.169335 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.170130 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc 
kubenswrapper[4984]: I0130 10:34:52.171003 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.176390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.183308 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.187447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.211096 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.306636 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.837810 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:53 crc kubenswrapper[4984]: W0130 10:34:52.845797 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92837592_8d1a_4eec_9c06_1d906b4724c2.slice/crio-ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f WatchSource:0}: Error finding container ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f: Status 404 returned error can't find the container with id ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.924074 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.925019 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.965432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.993841 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nc5p7" podStartSLOduration=2.076203017 podStartE2EDuration="4.993820454s" podCreationTimestamp="2026-01-30 10:34:48 +0000 
UTC" firstStartedPulling="2026-01-30 10:34:49.779382159 +0000 UTC m=+1394.345685983" lastFinishedPulling="2026-01-30 10:34:52.696999596 +0000 UTC m=+1397.263303420" observedRunningTime="2026-01-30 10:34:52.982652676 +0000 UTC m=+1397.548956520" watchObservedRunningTime="2026-01-30 10:34:52.993820454 +0000 UTC m=+1397.560124278" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.459002 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.461042 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.462871 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.471116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495303 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495377 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495403 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495471 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495612 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495694 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495725 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.597686 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598111 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598193 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598274 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599282 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599315 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod 
\"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599904 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.618134 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.775722 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:54 crc kubenswrapper[4984]: I0130 10:34:54.013208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74"} Jan 30 10:34:54 crc kubenswrapper[4984]: I0130 10:34:54.275291 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.019768 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06" exitCode=0 Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.019815 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"} Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.020067 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerStarted","Data":"0046ce7289b143c1afa94a4ee5518f2e0cbb8f236b2ed7fb1318f74a0dfbd833"} Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.021909 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943"} Jan 30 10:34:56 crc kubenswrapper[4984]: I0130 10:34:56.034545 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" 
event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerStarted","Data":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"} Jan 30 10:34:56 crc kubenswrapper[4984]: I0130 10:34:56.055177 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" podStartSLOduration=3.055158546 podStartE2EDuration="3.055158546s" podCreationTimestamp="2026-01-30 10:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:56.053637696 +0000 UTC m=+1400.619941520" watchObservedRunningTime="2026-01-30 10:34:56.055158546 +0000 UTC m=+1400.621462370" Jan 30 10:34:57 crc kubenswrapper[4984]: I0130 10:34:57.041750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:58 crc kubenswrapper[4984]: I0130 10:34:58.575352 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:58 crc kubenswrapper[4984]: I0130 10:34:58.576799 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:59 crc kubenswrapper[4984]: I0130 10:34:59.628432 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc5p7" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" probeResult="failure" output=< Jan 30 10:34:59 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:34:59 crc kubenswrapper[4984]: > Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.777589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.874677 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.875331 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns" containerID="cri-o://e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1" gracePeriod=10 Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.056199 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"] Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.060207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.124512 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"] Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.140517 4984 generic.go:334] "Generic (PLEG): container finished" podID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerID="e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1" exitCode=0 Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.140565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1"} Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230465 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230528 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230608 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230760 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336244 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336763 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336891 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.337001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.337942 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.338301 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.338758 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.341885 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.342773 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.343300 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.357079 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.389512 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.546439 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642021 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642300 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642358 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: 
\"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.654421 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b" (OuterVolumeSpecName: "kube-api-access-jld6b") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "kube-api-access-jld6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.744774 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.754850 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config" (OuterVolumeSpecName: "config") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.757024 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.763651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.765338 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.766046 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846665 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846698 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846711 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846723 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846734 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.866136 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"] Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.154714 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"3a743fd4af77fa8320a0aa82fc1ee65e702a095968f1a2be7dbc346d0b4f3fe2"} Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.155007 4984 scope.go:117] "RemoveContainer" 
containerID="e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1" Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.155641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerStarted","Data":"9ceacd3faf980f439c66b937024148d52844a30641fa1d51ecf875b71c842d50"} Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.156121 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.189108 4984 scope.go:117] "RemoveContainer" containerID="6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12" Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.207359 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.215143 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.101717 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" path="/var/lib/kubelet/pods/51b210b6-b9ff-41fd-b06b-77aca8956fb6/volumes" Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.163773 4984 generic.go:334] "Generic (PLEG): container finished" podID="f3033afa-9ac2-4f32-a02d-372dcdbeb984" containerID="37aa86f993c23a31c3e9a4dd657a8ca12ffec17662ac630ff75cd5c42e30e5c1" exitCode=0 Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.163838 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerDied","Data":"37aa86f993c23a31c3e9a4dd657a8ca12ffec17662ac630ff75cd5c42e30e5c1"} Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.177570 4984 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerStarted","Data":"42d0aecaafdd4bb804a7799dc5cd1267538271cfd08f0710c4b8d707a7fb9848"} Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.178366 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.213861 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" podStartSLOduration=3.21384384 podStartE2EDuration="3.21384384s" podCreationTimestamp="2026-01-30 10:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:07.200774822 +0000 UTC m=+1411.767078656" watchObservedRunningTime="2026-01-30 10:35:07.21384384 +0000 UTC m=+1411.780147654" Jan 30 10:35:08 crc kubenswrapper[4984]: I0130 10:35:08.631093 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:35:08 crc kubenswrapper[4984]: I0130 10:35:08.691999 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:35:09 crc kubenswrapper[4984]: I0130 10:35:09.432573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.201425 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nc5p7" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" containerID="cri-o://c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" gracePeriod=2 Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.641355 4984 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771349 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771611 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.772322 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities" (OuterVolumeSpecName: "utilities") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.777172 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v" (OuterVolumeSpecName: "kube-api-access-zfq5v") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "kube-api-access-zfq5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.874757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.874846 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.906460 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.975790 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210403 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" exitCode=0 Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210487 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"} Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210846 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"af28661a5979a3c9094f4b828557ffcee62d82e99faaa0eff4776f3226c108b1"} Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210864 4984 scope.go:117] "RemoveContainer" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.237996 4984 scope.go:117] "RemoveContainer" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.244873 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.254465 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"] Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.280695 4984 scope.go:117] "RemoveContainer" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299502 4984 scope.go:117] "RemoveContainer" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.299914 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": container with ID starting with c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05 not found: ID does not exist" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299944 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"} err="failed to get container status \"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": rpc error: code = NotFound desc = could not find container \"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": container with ID starting with c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05 not found: ID does not exist" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299964 4984 scope.go:117] "RemoveContainer" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355" Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.300394 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": container with ID starting with 2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355 not found: ID does not exist" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300434 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"} err="failed to get container status \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": rpc error: code = NotFound desc = could not find container \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": container with ID 
starting with 2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355 not found: ID does not exist" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300475 4984 scope.go:117] "RemoveContainer" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292" Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.300812 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": container with ID starting with 0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292 not found: ID does not exist" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292" Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300837 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"} err="failed to get container status \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": rpc error: code = NotFound desc = could not find container \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": container with ID starting with 0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292 not found: ID does not exist" Jan 30 10:35:12 crc kubenswrapper[4984]: I0130 10:35:12.110316 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931ec9af-3161-478a-9f45-556b11457731" path="/var/lib/kubelet/pods/931ec9af-3161-478a-9f45-556b11457731/volumes" Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.391796 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.458654 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 
10:35:14.459274 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns" containerID="cri-o://f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" gracePeriod=10 Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.940241 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.947887 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948331 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948485 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948576 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948821 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.954493 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc" (OuterVolumeSpecName: "kube-api-access-jcvtc") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "kube-api-access-jcvtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.032806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.051693 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.051728 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.061740 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.062817 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config" (OuterVolumeSpecName: "config") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.069890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.073923 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.086181 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153581 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153634 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153646 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153656 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") 
on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153668 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260722 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" exitCode=0 Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260771 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"} Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260787 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260807 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"0046ce7289b143c1afa94a4ee5518f2e0cbb8f236b2ed7fb1318f74a0dfbd833"} Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260826 4984 scope.go:117] "RemoveContainer" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.284266 4984 scope.go:117] "RemoveContainer" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.304707 4984 scope.go:117] "RemoveContainer" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" Jan 30 10:35:15 crc kubenswrapper[4984]: E0130 10:35:15.305164 4984 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": container with ID starting with f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b not found: ID does not exist" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305203 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"} err="failed to get container status \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": rpc error: code = NotFound desc = could not find container \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": container with ID starting with f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b not found: ID does not exist" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305224 4984 scope.go:117] "RemoveContainer" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06" Jan 30 10:35:15 crc kubenswrapper[4984]: E0130 10:35:15.305638 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": container with ID starting with d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06 not found: ID does not exist" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305666 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"} err="failed to get container status \"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": rpc error: code = NotFound desc = could not find container 
\"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": container with ID starting with d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06 not found: ID does not exist" Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.314423 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.322421 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:35:16 crc kubenswrapper[4984]: I0130 10:35:16.101534 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" path="/var/lib/kubelet/pods/a2811735-b4c5-4d3a-9b00-4eca7a41aef5/volumes" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.462918 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"] Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.463988 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464005 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464029 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464037 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464050 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-utilities" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 
10:35:27.464058 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-utilities" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464074 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-content" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464084 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-content" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464102 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464111 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464125 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="init" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464133 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="init" Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464157 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="init" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464166 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="init" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464401 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464429 4984 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464444 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.465163 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.468782 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.468891 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.469818 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.473780 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.489201 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"] Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651453 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651540 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651601 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651981 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.753906 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.753986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkds\" (UniqueName: 
\"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.754040 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.754104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.762962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.762996 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.763447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.773462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.834955 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.013105 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"] Jan 30 10:35:30 crc kubenswrapper[4984]: W0130 10:35:30.069189 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1985e15d_70be_4079_bd48_55c782dfcba7.slice/crio-86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397 WatchSource:0}: Error finding container 86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397: Status 404 returned error can't find the container with id 86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.428618 4984 generic.go:334] "Generic (PLEG): container finished" podID="92837592-8d1a-4eec-9c06-1d906b4724c2" containerID="61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943" exitCode=0 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.428712 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerDied","Data":"61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943"} Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.431687 4984 generic.go:334] "Generic (PLEG): container finished" podID="137801a7-4625-4c4c-a855-8ecdf65e509a" containerID="33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74" exitCode=0 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.431759 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerDied","Data":"33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74"} Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 
10:35:30.435552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerStarted","Data":"86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.454109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"accaeb30930aec0aaf9006f5c8caa26e36b661b3c76d850da365db7a9c9e871a"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.454630 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.457457 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"69628bb286ef87e652ac2a788d656591ea56a3d42903290ccd45df4bcf0ac19b"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.457676 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.500782 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.500759112 podStartE2EDuration="40.500759112s" podCreationTimestamp="2026-01-30 10:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:31.482561577 +0000 UTC m=+1436.048865431" watchObservedRunningTime="2026-01-30 10:35:31.500759112 +0000 UTC m=+1436.067062956" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.517823 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=40.517805657 podStartE2EDuration="40.517805657s" podCreationTimestamp="2026-01-30 10:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:31.514313014 +0000 UTC m=+1436.080616868" watchObservedRunningTime="2026-01-30 10:35:31.517805657 +0000 UTC m=+1436.084109481" Jan 30 10:35:40 crc kubenswrapper[4984]: I0130 10:35:40.532361 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:35:41 crc kubenswrapper[4984]: I0130 10:35:41.584031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerStarted","Data":"7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20"} Jan 30 10:35:41 crc kubenswrapper[4984]: I0130 10:35:41.646717 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="137801a7-4625-4c4c-a855-8ecdf65e509a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.210:5671: connect: connection refused" Jan 30 10:35:42 crc kubenswrapper[4984]: I0130 10:35:42.311485 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:35:42 crc kubenswrapper[4984]: I0130 10:35:42.356217 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" podStartSLOduration=4.900454605 podStartE2EDuration="15.356162775s" podCreationTimestamp="2026-01-30 10:35:27 +0000 UTC" firstStartedPulling="2026-01-30 10:35:30.074016649 +0000 UTC m=+1434.640320483" lastFinishedPulling="2026-01-30 10:35:40.529724819 +0000 UTC m=+1445.096028653" observedRunningTime="2026-01-30 10:35:41.600779883 +0000 UTC m=+1446.167083757" watchObservedRunningTime="2026-01-30 
10:35:42.356162775 +0000 UTC m=+1446.922466639" Jan 30 10:35:51 crc kubenswrapper[4984]: I0130 10:35:51.647622 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 10:35:52 crc kubenswrapper[4984]: I0130 10:35:52.728161 4984 generic.go:334] "Generic (PLEG): container finished" podID="1985e15d-70be-4079-bd48-55c782dfcba7" containerID="7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20" exitCode=0 Jan 30 10:35:52 crc kubenswrapper[4984]: I0130 10:35:52.728219 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerDied","Data":"7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20"} Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.141755 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.225939 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226163 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226385 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.231760 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.232649 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds" (OuterVolumeSpecName: "kube-api-access-gqkds") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "kube-api-access-gqkds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.254301 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory" (OuterVolumeSpecName: "inventory") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.259805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328855 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328899 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328913 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328924 4984 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754666 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerDied","Data":"86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397"} Jan 30 
10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754727 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754725 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.949927 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:54 crc kubenswrapper[4984]: E0130 10:35:54.953840 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.953883 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.957218 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.959850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.967520 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.968379 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.968774 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.969073 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.009689 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043782 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145323 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145612 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145804 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.152881 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: 
\"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.154020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.165057 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.301731 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.900032 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.775926 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerStarted","Data":"43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d"} Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.776565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerStarted","Data":"ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68"} Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.800672 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" podStartSLOduration=2.384750414 podStartE2EDuration="2.80065514s" podCreationTimestamp="2026-01-30 10:35:54 +0000 UTC" firstStartedPulling="2026-01-30 10:35:55.902698124 +0000 UTC m=+1460.469001988" lastFinishedPulling="2026-01-30 10:35:56.31860288 +0000 UTC m=+1460.884906714" observedRunningTime="2026-01-30 10:35:56.793422627 +0000 UTC m=+1461.359726451" watchObservedRunningTime="2026-01-30 10:35:56.80065514 +0000 UTC m=+1461.366958964" Jan 30 10:35:59 crc kubenswrapper[4984]: I0130 10:35:59.819614 4984 generic.go:334] "Generic (PLEG): container finished" podID="049a948c-1945-4217-b728-7f39570dd740" containerID="43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d" exitCode=0 Jan 30 10:35:59 crc kubenswrapper[4984]: I0130 10:35:59.819711 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerDied","Data":"43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d"} Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.253987 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378371 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378698 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378759 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.384881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq" (OuterVolumeSpecName: "kube-api-access-d4jcq") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "kube-api-access-d4jcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.404013 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory" (OuterVolumeSpecName: "inventory") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.413945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.480944 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.480999 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.481013 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.853965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" 
event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerDied","Data":"ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68"} Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.854590 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.854032 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.937879 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:01 crc kubenswrapper[4984]: E0130 10:36:01.938479 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.938510 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.938913 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.939984 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943077 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943520 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943869 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.965814 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.091770 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.091864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.092021 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.092101 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193651 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193709 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193857 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.203025 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.204855 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.226667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.227152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.268080 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.821210 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.867912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerStarted","Data":"01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66"} Jan 30 10:36:04 crc kubenswrapper[4984]: I0130 10:36:04.892589 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerStarted","Data":"0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904"} Jan 30 10:36:04 crc kubenswrapper[4984]: I0130 10:36:04.922326 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" podStartSLOduration=3.17374668 podStartE2EDuration="3.922229729s" podCreationTimestamp="2026-01-30 10:36:01 +0000 UTC" firstStartedPulling="2026-01-30 10:36:02.829625382 +0000 UTC m=+1467.395929226" 
lastFinishedPulling="2026-01-30 10:36:03.578108441 +0000 UTC m=+1468.144412275" observedRunningTime="2026-01-30 10:36:04.910412154 +0000 UTC m=+1469.476715978" watchObservedRunningTime="2026-01-30 10:36:04.922229729 +0000 UTC m=+1469.488533583" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.674087 4984 scope.go:117] "RemoveContainer" containerID="0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.717854 4984 scope.go:117] "RemoveContainer" containerID="f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.762943 4984 scope.go:117] "RemoveContainer" containerID="ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.819942 4984 scope.go:117] "RemoveContainer" containerID="f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.843538 4984 scope.go:117] "RemoveContainer" containerID="9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423" Jan 30 10:36:33 crc kubenswrapper[4984]: I0130 10:36:33.001404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:36:33 crc kubenswrapper[4984]: I0130 10:36:33.002460 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:03 crc kubenswrapper[4984]: I0130 10:37:03.001001 4984 patch_prober.go:28] interesting 
pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:37:03 crc kubenswrapper[4984]: I0130 10:37:03.001648 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:20 crc kubenswrapper[4984]: I0130 10:37:20.979143 4984 scope.go:117] "RemoveContainer" containerID="82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.021614 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.024646 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.037735 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050409 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050569 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152366 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152556 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.153000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.153140 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.178950 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.368046 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.909628 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.812582 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"} Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.812327 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" exitCode=0 Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.814418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"cf0415faef1337e90f74baefa9777e03b755fa05e81ba321a56d6e1ded44938a"} Jan 30 10:37:25 crc kubenswrapper[4984]: I0130 10:37:25.826028 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} Jan 30 10:37:26 crc kubenswrapper[4984]: I0130 10:37:26.887870 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" exitCode=0 Jan 30 10:37:26 crc kubenswrapper[4984]: I0130 10:37:26.887957 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" 
event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} Jan 30 10:37:27 crc kubenswrapper[4984]: I0130 10:37:27.898598 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} Jan 30 10:37:27 crc kubenswrapper[4984]: I0130 10:37:27.928723 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skxsz" podStartSLOduration=3.44156766 podStartE2EDuration="5.928703442s" podCreationTimestamp="2026-01-30 10:37:22 +0000 UTC" firstStartedPulling="2026-01-30 10:37:24.815210179 +0000 UTC m=+1549.381514033" lastFinishedPulling="2026-01-30 10:37:27.302345951 +0000 UTC m=+1551.868649815" observedRunningTime="2026-01-30 10:37:27.925813774 +0000 UTC m=+1552.492117598" watchObservedRunningTime="2026-01-30 10:37:27.928703442 +0000 UTC m=+1552.495007266" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.000649 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.001375 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.001430 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.003020 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.003101 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" gracePeriod=600 Jan 30 10:37:33 crc kubenswrapper[4984]: E0130 10:37:33.144234 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.368766 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.369002 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.443091 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: 
I0130 10:37:33.967386 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" exitCode=0 Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.967458 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"} Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.967526 4984 scope.go:117] "RemoveContainer" containerID="43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.968520 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:37:33 crc kubenswrapper[4984]: E0130 10:37:33.968930 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:37:34 crc kubenswrapper[4984]: I0130 10:37:34.055521 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:34 crc kubenswrapper[4984]: I0130 10:37:34.106227 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.007051 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skxsz" podUID="e7389f74-bc2d-4232-921b-527c824b7753" 
containerName="registry-server" containerID="cri-o://f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" gracePeriod=2 Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.518773 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.689935 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.690107 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.690183 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.691921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities" (OuterVolumeSpecName: "utilities") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.697824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5" (OuterVolumeSpecName: "kube-api-access-qnsd5") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "kube-api-access-qnsd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.715831 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792622 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792667 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792677 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016067 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" 
containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" exitCode=0 Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016124 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"cf0415faef1337e90f74baefa9777e03b755fa05e81ba321a56d6e1ded44938a"} Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016134 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016475 4984 scope.go:117] "RemoveContainer" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.035216 4984 scope.go:117] "RemoveContainer" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.056964 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.059974 4984 scope.go:117] "RemoveContainer" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.067312 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128165 4984 scope.go:117] "RemoveContainer" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 
10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.128673 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": container with ID starting with f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d not found: ID does not exist" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128713 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} err="failed to get container status \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": rpc error: code = NotFound desc = could not find container \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": container with ID starting with f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d not found: ID does not exist" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128767 4984 scope.go:117] "RemoveContainer" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.129159 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": container with ID starting with 94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35 not found: ID does not exist" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129200 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} err="failed to get container status 
\"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": rpc error: code = NotFound desc = could not find container \"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": container with ID starting with 94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35 not found: ID does not exist" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129227 4984 scope.go:117] "RemoveContainer" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" Jan 30 10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.129608 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": container with ID starting with fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf not found: ID does not exist" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129666 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"} err="failed to get container status \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": rpc error: code = NotFound desc = could not find container \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": container with ID starting with fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf not found: ID does not exist" Jan 30 10:37:38 crc kubenswrapper[4984]: I0130 10:37:38.108823 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7389f74-bc2d-4232-921b-527c824b7753" path="/var/lib/kubelet/pods/e7389f74-bc2d-4232-921b-527c824b7753/volumes" Jan 30 10:37:48 crc kubenswrapper[4984]: I0130 10:37:48.091124 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 
10:37:48 crc kubenswrapper[4984]: E0130 10:37:48.092030 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:37:59 crc kubenswrapper[4984]: I0130 10:37:59.090455 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:37:59 crc kubenswrapper[4984]: E0130 10:37:59.091385 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:38:11 crc kubenswrapper[4984]: I0130 10:38:11.091066 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:38:11 crc kubenswrapper[4984]: E0130 10:38:11.092804 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.069651 4984 scope.go:117] "RemoveContainer" 
containerID="96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00" Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.113591 4984 scope.go:117] "RemoveContainer" containerID="3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca" Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.150402 4984 scope.go:117] "RemoveContainer" containerID="9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b" Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.180373 4984 scope.go:117] "RemoveContainer" containerID="3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631" Jan 30 10:38:24 crc kubenswrapper[4984]: I0130 10:38:24.089941 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:38:24 crc kubenswrapper[4984]: E0130 10:38:24.090211 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:38:35 crc kubenswrapper[4984]: I0130 10:38:35.090393 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:38:35 crc kubenswrapper[4984]: E0130 10:38:35.091415 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:38:50 crc 
kubenswrapper[4984]: I0130 10:38:50.090557 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:38:50 crc kubenswrapper[4984]: E0130 10:38:50.091590 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:03 crc kubenswrapper[4984]: I0130 10:39:03.090695 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:39:03 crc kubenswrapper[4984]: E0130 10:39:03.091520 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:07 crc kubenswrapper[4984]: I0130 10:39:07.979180 4984 generic.go:334] "Generic (PLEG): container finished" podID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerID="0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904" exitCode=0 Jan 30 10:39:07 crc kubenswrapper[4984]: I0130 10:39:07.979290 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerDied","Data":"0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904"} Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.496294 4984 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.627773 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.627888 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.628026 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.628148 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.633555 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.634033 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2" (OuterVolumeSpecName: "kube-api-access-2xfl2") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "kube-api-access-2xfl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.659592 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory" (OuterVolumeSpecName: "inventory") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.660268 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730678 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") on node \"crc\" DevicePath \"\"" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730731 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730741 4984 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999290 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerDied","Data":"01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66"} Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999324 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999336 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.106898 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"] Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107407 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-utilities" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107433 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-utilities" Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107452 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107465 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107495 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-content" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107504 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-content" Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107528 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server" Jan 30 10:39:10 crc 
kubenswrapper[4984]: I0130 10:39:10.107535 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107757 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107783 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.108559 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111363 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111465 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111718 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.112166 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.116505 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"] Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.242859 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.242942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.244141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.346503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.346655 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: 
\"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.347450 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.354018 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.357204 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.367609 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.431072 4984 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:39:11 crc kubenswrapper[4984]: I0130 10:39:11.035584 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"] Jan 30 10:39:11 crc kubenswrapper[4984]: W0130 10:39:11.040430 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8414dabf_1fa1_4a4c_8db5_55ef7397164d.slice/crio-fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62 WatchSource:0}: Error finding container fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62: Status 404 returned error can't find the container with id fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62 Jan 30 10:39:11 crc kubenswrapper[4984]: I0130 10:39:11.044964 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.017886 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerStarted","Data":"82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74"} Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.018412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerStarted","Data":"fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62"} Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.039974 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" podStartSLOduration=1.663479111 podStartE2EDuration="2.039951623s" 
podCreationTimestamp="2026-01-30 10:39:10 +0000 UTC" firstStartedPulling="2026-01-30 10:39:11.04467193 +0000 UTC m=+1655.610975764" lastFinishedPulling="2026-01-30 10:39:11.421144442 +0000 UTC m=+1655.987448276" observedRunningTime="2026-01-30 10:39:12.0335786 +0000 UTC m=+1656.599882424" watchObservedRunningTime="2026-01-30 10:39:12.039951623 +0000 UTC m=+1656.606255447" Jan 30 10:39:14 crc kubenswrapper[4984]: I0130 10:39:14.090905 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:39:14 crc kubenswrapper[4984]: E0130 10:39:14.091436 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.267896 4984 scope.go:117] "RemoveContainer" containerID="5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.309069 4984 scope.go:117] "RemoveContainer" containerID="6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.346890 4984 scope.go:117] "RemoveContainer" containerID="295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.380142 4984 scope.go:117] "RemoveContainer" containerID="26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.411173 4984 scope.go:117] "RemoveContainer" containerID="e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 
10:39:21.438421 4984 scope.go:117] "RemoveContainer" containerID="d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.455797 4984 scope.go:117] "RemoveContainer" containerID="3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af" Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.479896 4984 scope.go:117] "RemoveContainer" containerID="33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce" Jan 30 10:39:29 crc kubenswrapper[4984]: I0130 10:39:29.091532 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:39:29 crc kubenswrapper[4984]: E0130 10:39:29.092904 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.054919 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.066236 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.078760 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.089176 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.102995 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" path="/var/lib/kubelet/pods/dd118357-c4bf-43ef-a738-9fcd6b07aac4/volumes" Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.103840 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" path="/var/lib/kubelet/pods/e83eb734-fae0-40ac-85db-8f8c8fb26133/volumes" Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.033649 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.044369 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.053134 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.060554 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:39:32 crc kubenswrapper[4984]: I0130 10:39:32.102051 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" path="/var/lib/kubelet/pods/0dd7bd77-9e19-4ad1-9711-e0290f74afa8/volumes" Jan 30 10:39:32 crc kubenswrapper[4984]: I0130 10:39:32.102673 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c89dde7-c492-44dd-b36c-571540039b30" path="/var/lib/kubelet/pods/4c89dde7-c492-44dd-b36c-571540039b30/volumes" Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.042902 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"] Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.060529 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mwcqt"] Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.069019 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-create-mwcqt"] Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.076053 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"] Jan 30 10:39:38 crc kubenswrapper[4984]: I0130 10:39:38.108168 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" path="/var/lib/kubelet/pods/83c0dd46-b897-468f-87a0-a335dd8fd6d5/volumes" Jan 30 10:39:38 crc kubenswrapper[4984]: I0130 10:39:38.109952 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849571b4-26bb-4853-af9c-f717967dea41" path="/var/lib/kubelet/pods/849571b4-26bb-4853-af9c-f717967dea41/volumes" Jan 30 10:39:40 crc kubenswrapper[4984]: I0130 10:39:40.090356 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:39:40 crc kubenswrapper[4984]: E0130 10:39:40.090747 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:54 crc kubenswrapper[4984]: I0130 10:39:54.090308 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:39:54 crc kubenswrapper[4984]: E0130 10:39:54.091112 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.043761 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.053330 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.063842 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.072303 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.080918 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.089220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.103666 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" path="/var/lib/kubelet/pods/1c6c0cd3-99cd-454e-8ceb-000141c59c2b/volumes" Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.104891 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" path="/var/lib/kubelet/pods/d291ef2c-2cdb-47be-b508-efd4c8282791/volumes" Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.105727 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.107567 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:39:59 crc kubenswrapper[4984]: 
I0130 10:39:59.030744 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.041295 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.051936 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.062752 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.105665 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" path="/var/lib/kubelet/pods/26267a37-c8e7-45b3-af7f-8050a58cb697/volumes" Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.107966 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" path="/var/lib/kubelet/pods/341b21ee-dc5c-48f9-9810-85d1af9b9de9/volumes" Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.109241 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" path="/var/lib/kubelet/pods/c4f293b1-64af-45c3-8ee1-b8df7efdde3e/volumes" Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.110497 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" path="/var/lib/kubelet/pods/f5490b62-8700-4c9c-b4f7-517c71f91c46/volumes" Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 10:40:02.045558 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 10:40:02.054315 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 
10:40:02.100091 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfce8525-20d3-4c57-9638-37a46571c375" path="/var/lib/kubelet/pods/bfce8525-20d3-4c57-9638-37a46571c375/volumes" Jan 30 10:40:06 crc kubenswrapper[4984]: I0130 10:40:06.096688 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:06 crc kubenswrapper[4984]: E0130 10:40:06.097632 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:40:18 crc kubenswrapper[4984]: I0130 10:40:18.090866 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:18 crc kubenswrapper[4984]: E0130 10:40:18.092300 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.554666 4984 scope.go:117] "RemoveContainer" containerID="990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.596550 4984 scope.go:117] "RemoveContainer" containerID="6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.674856 4984 scope.go:117] 
"RemoveContainer" containerID="2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.702538 4984 scope.go:117] "RemoveContainer" containerID="498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.746078 4984 scope.go:117] "RemoveContainer" containerID="b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.796641 4984 scope.go:117] "RemoveContainer" containerID="b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.841719 4984 scope.go:117] "RemoveContainer" containerID="f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.887022 4984 scope.go:117] "RemoveContainer" containerID="f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.917220 4984 scope.go:117] "RemoveContainer" containerID="65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.945238 4984 scope.go:117] "RemoveContainer" containerID="70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.973120 4984 scope.go:117] "RemoveContainer" containerID="e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21" Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.991637 4984 scope.go:117] "RemoveContainer" containerID="636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be" Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.014824 4984 scope.go:117] "RemoveContainer" containerID="4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9" Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.041806 4984 scope.go:117] "RemoveContainer" 
containerID="73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb" Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.066795 4984 scope.go:117] "RemoveContainer" containerID="3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2" Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.054650 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.068893 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.108184 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" path="/var/lib/kubelet/pods/ca9a5e83-0bd4-4550-a3c9-e297cc831e99/volumes" Jan 30 10:40:32 crc kubenswrapper[4984]: I0130 10:40:32.089862 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:32 crc kubenswrapper[4984]: E0130 10:40:32.090398 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.028641 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.036821 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.111079 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" path="/var/lib/kubelet/pods/58c1d730-34f1-4912-a0e9-f19d10e9ec9b/volumes" Jan 30 10:40:45 crc kubenswrapper[4984]: I0130 10:40:45.090632 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:45 crc kubenswrapper[4984]: E0130 10:40:45.091420 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:00 crc kubenswrapper[4984]: I0130 10:41:00.090307 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:00 crc kubenswrapper[4984]: E0130 10:41:00.091107 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.090695 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:12 crc kubenswrapper[4984]: E0130 10:41:12.091687 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.275956 4984 generic.go:334] "Generic (PLEG): container finished" podID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerID="82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74" exitCode=0 Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.276004 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerDied","Data":"82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74"} Jan 30 10:41:13 crc kubenswrapper[4984]: I0130 10:41:13.808445 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002572 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002682 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002724 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.011059 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx" (OuterVolumeSpecName: "kube-api-access-46bkx") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "kube-api-access-46bkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.031744 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.035308 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory" (OuterVolumeSpecName: "inventory") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107080 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107126 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107138 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.309710 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerDied","Data":"fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62"} Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.310116 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.309810 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.433626 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:14 crc kubenswrapper[4984]: E0130 10:41:14.434223 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.434280 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.434618 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.435637 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.439889 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440434 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440660 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440911 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.453615 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.620352 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.620987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 
10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.621222 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723499 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.730067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.730180 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.745627 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.768961 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:15 crc kubenswrapper[4984]: I0130 10:41:15.356496 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:15 crc kubenswrapper[4984]: W0130 10:41:15.362083 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded90c997_eddb_4afb_ae0d_31dd3ef4c485.slice/crio-e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557 WatchSource:0}: Error finding container e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557: Status 404 returned error can't find the container with id e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557 Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.329040 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerStarted","Data":"0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d"} Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.329492 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerStarted","Data":"e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557"} Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.348359 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" podStartSLOduration=1.751730427 podStartE2EDuration="2.348331376s" podCreationTimestamp="2026-01-30 10:41:14 +0000 UTC" firstStartedPulling="2026-01-30 10:41:15.365926813 +0000 UTC m=+1779.932230637" lastFinishedPulling="2026-01-30 10:41:15.962527742 +0000 
UTC m=+1780.528831586" observedRunningTime="2026-01-30 10:41:16.342965211 +0000 UTC m=+1780.909269065" watchObservedRunningTime="2026-01-30 10:41:16.348331376 +0000 UTC m=+1780.914635240" Jan 30 10:41:22 crc kubenswrapper[4984]: I0130 10:41:22.307427 4984 scope.go:117] "RemoveContainer" containerID="0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf" Jan 30 10:41:22 crc kubenswrapper[4984]: I0130 10:41:22.359876 4984 scope.go:117] "RemoveContainer" containerID="429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534" Jan 30 10:41:23 crc kubenswrapper[4984]: I0130 10:41:23.090897 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:23 crc kubenswrapper[4984]: E0130 10:41:23.091881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:29 crc kubenswrapper[4984]: I0130 10:41:29.057156 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:41:29 crc kubenswrapper[4984]: I0130 10:41:29.072313 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:41:30 crc kubenswrapper[4984]: I0130 10:41:30.110242 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" path="/var/lib/kubelet/pods/e6ce38a2-070f-4aac-9495-d27d915c5ae1/volumes" Jan 30 10:41:34 crc kubenswrapper[4984]: I0130 10:41:34.090144 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:34 crc 
kubenswrapper[4984]: E0130 10:41:34.090769 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:47 crc kubenswrapper[4984]: I0130 10:41:47.090133 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:47 crc kubenswrapper[4984]: E0130 10:41:47.091442 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.061485 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.073287 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.108667 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" path="/var/lib/kubelet/pods/3048d738-67a2-417f-91ca-8993f4b557f1/volumes" Jan 30 10:41:57 crc kubenswrapper[4984]: I0130 10:41:57.036120 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:41:57 crc kubenswrapper[4984]: I0130 10:41:57.046054 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:41:58 crc kubenswrapper[4984]: I0130 10:41:58.109695 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" path="/var/lib/kubelet/pods/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1/volumes" Jan 30 10:41:59 crc kubenswrapper[4984]: I0130 10:41:59.030419 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:41:59 crc kubenswrapper[4984]: I0130 10:41:59.039992 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:42:00 crc kubenswrapper[4984]: I0130 10:42:00.121173 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" path="/var/lib/kubelet/pods/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80/volumes" Jan 30 10:42:01 crc kubenswrapper[4984]: I0130 10:42:01.091043 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:01 crc kubenswrapper[4984]: E0130 10:42:01.091728 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:09 crc kubenswrapper[4984]: I0130 10:42:09.056097 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:42:09 crc kubenswrapper[4984]: I0130 10:42:09.072735 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:42:10 crc kubenswrapper[4984]: I0130 10:42:10.102464 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" path="/var/lib/kubelet/pods/2405c6ec-2510-4786-a602-ae85d358ed1f/volumes" Jan 30 10:42:16 crc kubenswrapper[4984]: I0130 10:42:16.097175 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:16 crc kubenswrapper[4984]: E0130 10:42:16.098165 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.437539 4984 scope.go:117] "RemoveContainer" containerID="71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.464588 4984 scope.go:117] "RemoveContainer" containerID="f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.585203 4984 scope.go:117] "RemoveContainer" containerID="ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.635887 4984 scope.go:117] "RemoveContainer" containerID="39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.707223 4984 scope.go:117] "RemoveContainer" containerID="886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.045178 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.059178 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.066421 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.073669 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.080427 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.087449 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.101019 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" path="/var/lib/kubelet/pods/0b0be8dd-7b50-43e1-b223-8d5082a0c499/volumes" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.102278 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" path="/var/lib/kubelet/pods/61ce47a3-89a8-45f2-809e-9aaab0e718e2/volumes" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.103171 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" path="/var/lib/kubelet/pods/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24/volumes" Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.029659 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.038328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.052282 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 
10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.066690 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.073317 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.080733 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.102472 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" path="/var/lib/kubelet/pods/24e68f06-af93-45d0-bf19-26469cac41f1/volumes" Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.103234 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" path="/var/lib/kubelet/pods/3c78c96a-fba2-4de8-ab70-a16d31722959/volumes" Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.103769 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" path="/var/lib/kubelet/pods/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1/volumes" Jan 30 10:42:31 crc kubenswrapper[4984]: I0130 10:42:31.090286 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:31 crc kubenswrapper[4984]: E0130 10:42:31.090915 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.090297 
4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.143801 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerID="0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d" exitCode=0 Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.143858 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerDied","Data":"0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d"} Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.154936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"} Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.633209 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.698908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.699140 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.699311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.707467 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2" (OuterVolumeSpecName: "kube-api-access-72sz2") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). InnerVolumeSpecName "kube-api-access-72sz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.737265 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory" (OuterVolumeSpecName: "inventory") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.757009 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801360 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801513 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801572 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.168186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerDied","Data":"e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557"} Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.168235 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 
10:42:45.168235 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.250466 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:45 crc kubenswrapper[4984]: E0130 10:42:45.250881 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.250900 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.251078 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.251688 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254022 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254208 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254388 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.259063 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.270116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311589 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 
10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311666 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413115 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413302 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413344 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.417889 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.426890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.438767 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.602887 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.932520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:46 crc kubenswrapper[4984]: I0130 10:42:46.181177 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerStarted","Data":"d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d"} Jan 30 10:42:47 crc kubenswrapper[4984]: I0130 10:42:47.214893 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerStarted","Data":"a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af"} Jan 30 10:42:47 crc kubenswrapper[4984]: I0130 10:42:47.242701 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" podStartSLOduration=1.82201923 podStartE2EDuration="2.242682527s" podCreationTimestamp="2026-01-30 10:42:45 +0000 UTC" firstStartedPulling="2026-01-30 10:42:45.936292763 +0000 UTC m=+1870.502596577" lastFinishedPulling="2026-01-30 10:42:46.35695605 +0000 UTC m=+1870.923259874" observedRunningTime="2026-01-30 10:42:47.233034726 +0000 UTC m=+1871.799338560" watchObservedRunningTime="2026-01-30 10:42:47.242682527 +0000 UTC m=+1871.808986361" Jan 30 10:42:51 crc kubenswrapper[4984]: I0130 10:42:51.262103 4984 generic.go:334] "Generic (PLEG): container finished" podID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerID="a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af" exitCode=0 Jan 30 10:42:51 crc kubenswrapper[4984]: I0130 10:42:51.262151 4984 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerDied","Data":"a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af"} Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.748230 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861394 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861441 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861743 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.872582 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv" (OuterVolumeSpecName: "kube-api-access-88sqv") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "kube-api-access-88sqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.897637 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory" (OuterVolumeSpecName: "inventory") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.922325 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964354 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964396 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964410 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.284749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" 
event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerDied","Data":"d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d"} Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.285018 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.284787 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.367605 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:53 crc kubenswrapper[4984]: E0130 10:42:53.367995 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368013 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368180 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368782 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.370844 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.371319 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.372037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.372307 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.386389 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.473733 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.473792 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 
10:42:53.473850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575412 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575607 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575667 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.581820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.581973 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.608853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.695207 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:54 crc kubenswrapper[4984]: I0130 10:42:54.270530 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:54 crc kubenswrapper[4984]: I0130 10:42:54.298365 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerStarted","Data":"1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e"} Jan 30 10:42:55 crc kubenswrapper[4984]: I0130 10:42:55.310088 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerStarted","Data":"cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a"} Jan 30 10:42:55 crc kubenswrapper[4984]: I0130 10:42:55.332544 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" podStartSLOduration=1.800236569 podStartE2EDuration="2.332524298s" podCreationTimestamp="2026-01-30 10:42:53 +0000 UTC" firstStartedPulling="2026-01-30 10:42:54.274759704 +0000 UTC m=+1878.841063538" lastFinishedPulling="2026-01-30 10:42:54.807047393 +0000 UTC m=+1879.373351267" observedRunningTime="2026-01-30 10:42:55.330833282 +0000 UTC m=+1879.897137106" watchObservedRunningTime="2026-01-30 10:42:55.332524298 +0000 UTC m=+1879.898828132" Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 10:42:56.065083 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 10:42:56.075895 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 
10:42:56.121989 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" path="/var/lib/kubelet/pods/deaa8458-e32e-4a6f-9e67-3e394d9daa32/volumes" Jan 30 10:43:19 crc kubenswrapper[4984]: I0130 10:43:19.036286 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:43:19 crc kubenswrapper[4984]: I0130 10:43:19.045032 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:43:20 crc kubenswrapper[4984]: I0130 10:43:20.106498 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" path="/var/lib/kubelet/pods/e9c9c509-275d-47bc-81f8-755bab6b2be8/volumes" Jan 30 10:43:22 crc kubenswrapper[4984]: I0130 10:43:22.949064 4984 scope.go:117] "RemoveContainer" containerID="f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.002922 4984 scope.go:117] "RemoveContainer" containerID="6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.029462 4984 scope.go:117] "RemoveContainer" containerID="d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.110113 4984 scope.go:117] "RemoveContainer" containerID="73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.134140 4984 scope.go:117] "RemoveContainer" containerID="b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.182873 4984 scope.go:117] "RemoveContainer" containerID="f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.214381 4984 scope.go:117] "RemoveContainer" 
containerID="55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.230035 4984 scope.go:117] "RemoveContainer" containerID="9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f" Jan 30 10:43:32 crc kubenswrapper[4984]: I0130 10:43:32.646999 4984 generic.go:334] "Generic (PLEG): container finished" podID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerID="cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a" exitCode=0 Jan 30 10:43:32 crc kubenswrapper[4984]: I0130 10:43:32.647115 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerDied","Data":"cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a"} Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.136917 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286058 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286358 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: 
\"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.290858 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh" (OuterVolumeSpecName: "kube-api-access-9qmzh") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "kube-api-access-9qmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.330041 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory" (OuterVolumeSpecName: "inventory") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.332599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388707 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388753 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388768 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerDied","Data":"1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e"} Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668748 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668837 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.824749 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:34 crc kubenswrapper[4984]: E0130 10:43:34.825282 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.825308 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.825559 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.826354 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833190 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833194 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833541 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.834684 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.838198 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.000859 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.000972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.001097 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103988 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.107199 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.115392 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.118376 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.151384 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.765291 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.214058 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.684809 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerStarted","Data":"a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e"} Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.685435 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerStarted","Data":"7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44"} Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.705492 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" podStartSLOduration=2.267238146 podStartE2EDuration="2.705470936s" podCreationTimestamp="2026-01-30 10:43:34 +0000 UTC" firstStartedPulling="2026-01-30 10:43:35.77348073 +0000 UTC m=+1920.339784554" lastFinishedPulling="2026-01-30 10:43:36.21171352 +0000 UTC m=+1920.778017344" observedRunningTime="2026-01-30 10:43:36.702577488 +0000 UTC m=+1921.268881352" watchObservedRunningTime="2026-01-30 10:43:36.705470936 +0000 UTC m=+1921.271774760" Jan 30 10:43:37 crc kubenswrapper[4984]: I0130 10:43:37.030691 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:43:37 crc 
kubenswrapper[4984]: I0130 10:43:37.040500 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:43:38 crc kubenswrapper[4984]: I0130 10:43:38.099817 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6148a148-07c4-4584-95ff-10d5e5147954" path="/var/lib/kubelet/pods/6148a148-07c4-4584-95ff-10d5e5147954/volumes" Jan 30 10:44:05 crc kubenswrapper[4984]: I0130 10:44:05.043792 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:44:05 crc kubenswrapper[4984]: I0130 10:44:05.054287 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:44:06 crc kubenswrapper[4984]: I0130 10:44:06.100727 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" path="/var/lib/kubelet/pods/a005f64f-9ec0-4a4a-b64e-9ae00924dce7/volumes" Jan 30 10:44:22 crc kubenswrapper[4984]: I0130 10:44:22.124888 4984 generic.go:334] "Generic (PLEG): container finished" podID="5ca6f868-9db4-483a-bea5-dc471b160721" containerID="a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e" exitCode=0 Jan 30 10:44:22 crc kubenswrapper[4984]: I0130 10:44:22.124966 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerDied","Data":"a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e"} Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.384885 4984 scope.go:117] "RemoveContainer" containerID="01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.453765 4984 scope.go:117] "RemoveContainer" containerID="899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.617736 4984 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.788911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.789410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.789562 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.797113 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd" (OuterVolumeSpecName: "kube-api-access-wtcjd") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "kube-api-access-wtcjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.816511 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.832084 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory" (OuterVolumeSpecName: "inventory") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.891938 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.892131 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.892147 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" 
event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerDied","Data":"7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44"} Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143503 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143567 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.223696 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: E0130 10:44:24.224078 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224095 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224310 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224973 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.227668 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.228050 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.228284 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.239587 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.240588 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402179 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402467 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503966 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.507902 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: 
I0130 10:44:24.520830 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.535050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.543336 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.903569 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.906029 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:44:25 crc kubenswrapper[4984]: I0130 10:44:25.158485 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerStarted","Data":"fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff"} Jan 30 10:44:28 crc kubenswrapper[4984]: I0130 10:44:28.186554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerStarted","Data":"91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211"} Jan 30 10:44:28 crc kubenswrapper[4984]: I0130 
10:44:28.202928 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" podStartSLOduration=1.7488395799999998 podStartE2EDuration="4.202904382s" podCreationTimestamp="2026-01-30 10:44:24 +0000 UTC" firstStartedPulling="2026-01-30 10:44:24.905843392 +0000 UTC m=+1969.472147216" lastFinishedPulling="2026-01-30 10:44:27.359908184 +0000 UTC m=+1971.926212018" observedRunningTime="2026-01-30 10:44:28.200724053 +0000 UTC m=+1972.767027887" watchObservedRunningTime="2026-01-30 10:44:28.202904382 +0000 UTC m=+1972.769208226" Jan 30 10:44:35 crc kubenswrapper[4984]: I0130 10:44:35.250195 4984 generic.go:334] "Generic (PLEG): container finished" podID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerID="91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211" exitCode=0 Jan 30 10:44:35 crc kubenswrapper[4984]: I0130 10:44:35.250322 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerDied","Data":"91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211"} Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.691636 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723263 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723318 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.738078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j" (OuterVolumeSpecName: "kube-api-access-hdx5j") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "kube-api-access-hdx5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.775889 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.781612 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825428 4984 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825467 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825481 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" 
event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerDied","Data":"fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff"} Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293674 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293694 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366102 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:37 crc kubenswrapper[4984]: E0130 10:44:37.366619 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366634 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366855 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.367563 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.372963 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373384 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373587 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373654 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.376768 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438834 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438978 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.551053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.552534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.574624 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.766900 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:38 crc kubenswrapper[4984]: W0130 10:44:38.306520 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb337ec46_c5ba_4b83_91f7_ad4b826d9595.slice/crio-5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e WatchSource:0}: Error finding container 5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e: Status 404 returned error can't find the container with id 5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e Jan 30 10:44:38 crc kubenswrapper[4984]: I0130 10:44:38.310579 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:39 crc kubenswrapper[4984]: I0130 10:44:39.311107 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerStarted","Data":"5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e"} Jan 30 10:44:40 crc kubenswrapper[4984]: I0130 10:44:40.321590 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerStarted","Data":"d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb"} Jan 30 10:44:40 crc kubenswrapper[4984]: I0130 10:44:40.351132 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" podStartSLOduration=2.743996561 podStartE2EDuration="3.351110561s" podCreationTimestamp="2026-01-30 10:44:37 +0000 UTC" firstStartedPulling="2026-01-30 10:44:38.308921699 +0000 UTC m=+1982.875225523" lastFinishedPulling="2026-01-30 10:44:38.916035689 +0000 UTC m=+1983.482339523" observedRunningTime="2026-01-30 
10:44:40.34363299 +0000 UTC m=+1984.909936834" watchObservedRunningTime="2026-01-30 10:44:40.351110561 +0000 UTC m=+1984.917414395" Jan 30 10:44:47 crc kubenswrapper[4984]: I0130 10:44:47.390538 4984 generic.go:334] "Generic (PLEG): container finished" podID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerID="d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb" exitCode=0 Jan 30 10:44:47 crc kubenswrapper[4984]: I0130 10:44:47.390631 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerDied","Data":"d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb"} Jan 30 10:44:48 crc kubenswrapper[4984]: I0130 10:44:48.915689 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077327 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077546 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077763 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: 
\"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.086750 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v" (OuterVolumeSpecName: "kube-api-access-gt68v") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "kube-api-access-gt68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.113845 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.138024 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory" (OuterVolumeSpecName: "inventory") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180858 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180895 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180909 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415434 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerDied","Data":"5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e"} Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415496 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415607 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527076 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:49 crc kubenswrapper[4984]: E0130 10:44:49.527634 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527665 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527987 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.528930 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533322 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533585 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533795 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.541418 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.547984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.696972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.697093 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 
10:44:49.697225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799219 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799342 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.805556 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.807310 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.819851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.860924 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:50 crc kubenswrapper[4984]: I0130 10:44:50.435589 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:51 crc kubenswrapper[4984]: I0130 10:44:51.443902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerStarted","Data":"8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254"} Jan 30 10:44:52 crc kubenswrapper[4984]: I0130 10:44:52.454166 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerStarted","Data":"c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018"} Jan 30 10:44:52 crc kubenswrapper[4984]: I0130 10:44:52.471478 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" podStartSLOduration=2.663590802 podStartE2EDuration="3.471462562s" podCreationTimestamp="2026-01-30 10:44:49 +0000 UTC" firstStartedPulling="2026-01-30 10:44:50.443592405 +0000 UTC m=+1995.009896229" lastFinishedPulling="2026-01-30 10:44:51.251464125 +0000 UTC m=+1995.817767989" observedRunningTime="2026-01-30 10:44:52.468938604 +0000 UTC m=+1997.035242428" watchObservedRunningTime="2026-01-30 10:44:52.471462562 +0000 UTC m=+1997.037766386" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.146113 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.148850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.153227 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.153307 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.158051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.445747 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.445895 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.446022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.447064 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.455352 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.483099 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.488211 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.010390 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.544240 4984 generic.go:334] "Generic (PLEG): container finished" podID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerID="c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018" exitCode=0 Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.544308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerDied","Data":"c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.548175 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" 
event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerStarted","Data":"cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.548209 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerStarted","Data":"bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.583879 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" podStartSLOduration=1.5838605019999998 podStartE2EDuration="1.583860502s" podCreationTimestamp="2026-01-30 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:45:01.574755657 +0000 UTC m=+2006.141059491" watchObservedRunningTime="2026-01-30 10:45:01.583860502 +0000 UTC m=+2006.150164336" Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.560528 4984 generic.go:334] "Generic (PLEG): container finished" podID="c04603fc-717d-4780-886e-4e449999ca6c" containerID="cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7" exitCode=0 Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.561618 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerDied","Data":"cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7"} Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.973360 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.001090 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.001139 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097540 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097638 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097699 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.103711 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv" (OuterVolumeSpecName: "kube-api-access-4fjgv") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "kube-api-access-4fjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.129958 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.132120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory" (OuterVolumeSpecName: "inventory") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200232 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200314 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200324 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.589696 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.590301 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerDied","Data":"8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254"} Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.590346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.664396 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:03 crc kubenswrapper[4984]: E0130 10:45:03.665050 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.665073 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.665355 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.666101 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668360 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668673 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668926 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669539 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669908 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.670020 4984 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.670140 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.722472 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.809891 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.809957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810028 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810052 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810111 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810348 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810461 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810514 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810576 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810882 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: E0130 10:45:03.816798 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b5ab38_6c9b_4526_bbee_d3a4c460ea78.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b5ab38_6c9b_4526_bbee_d3a4c460ea78.slice/crio-8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254\": RecentStats: unable to find data in memory cache]" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915044 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915154 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915200 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915268 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915322 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915373 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915410 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915466 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915639 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915752 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: 
\"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915876 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.920941 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.920974 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.921525 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 
10:45:03.921945 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.922421 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.922529 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924339 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.932135 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.939748 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.014730 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.018755 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120195 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120671 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120818 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.121051 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.121547 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.124612 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.127977 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p" (OuterVolumeSpecName: "kube-api-access-6g74p") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "kube-api-access-6g74p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.223891 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.223949 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.563740 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:04 crc kubenswrapper[4984]: W0130 10:45:04.570737 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908eb334_fac2_41ed_96d6_d7c80f8e98b3.slice/crio-455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba WatchSource:0}: Error finding container 455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba: Status 404 returned error can't find the container with id 455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.604108 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerStarted","Data":"455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba"} Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.607421 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.611348 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerDied","Data":"bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74"} Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.611431 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.656651 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.664060 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:45:05 crc kubenswrapper[4984]: I0130 10:45:05.622109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerStarted","Data":"e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a"} Jan 30 10:45:05 crc kubenswrapper[4984]: I0130 10:45:05.645979 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" podStartSLOduration=2.082902925 podStartE2EDuration="2.645964798s" podCreationTimestamp="2026-01-30 10:45:03 +0000 UTC" firstStartedPulling="2026-01-30 10:45:04.574813093 +0000 UTC m=+2009.141116917" lastFinishedPulling="2026-01-30 10:45:05.137874916 +0000 UTC m=+2009.704178790" observedRunningTime="2026-01-30 10:45:05.642551166 +0000 UTC m=+2010.208855020" watchObservedRunningTime="2026-01-30 
10:45:05.645964798 +0000 UTC m=+2010.212268622" Jan 30 10:45:06 crc kubenswrapper[4984]: I0130 10:45:06.102387 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" path="/var/lib/kubelet/pods/fbdde9dd-69cf-405d-9143-1739e3acbdde/volumes" Jan 30 10:45:23 crc kubenswrapper[4984]: I0130 10:45:23.565519 4984 scope.go:117] "RemoveContainer" containerID="b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.223304 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"] Jan 30 10:45:28 crc kubenswrapper[4984]: E0130 10:45:28.224399 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.224418 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.224685 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.226658 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.235636 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"] Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271525 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271589 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271673 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374304 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374368 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374451 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.375352 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.375351 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.394899 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.549028 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.110077 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"] Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853420 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f" exitCode=0 Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853540 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"} Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853740 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"24a1148d15bee715c556a07badff1472f1bb6f79211e4948aa32e4f198a42f23"} Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.021980 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.024943 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.051792 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.220986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.221926 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222563 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222614 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.250960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.346500 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.623277 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.625318 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.648862 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732074 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732915 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.825736 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834434 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834792 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834874 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " 
pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.854218 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.889566 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerStarted","Data":"a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566"} Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.952350 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:31 crc kubenswrapper[4984]: I0130 10:45:31.892340 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:31 crc kubenswrapper[4984]: W0130 10:45:31.901922 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144fba12_676d_457b_83f6_6195f089a240.slice/crio-e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5 WatchSource:0}: Error finding container e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5: Status 404 returned error can't find the container with id e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5 Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.906320 4984 generic.go:334] "Generic (PLEG): container finished" podID="144fba12-676d-457b-83f6-6195f089a240" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" exitCode=0 Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 
10:45:32.906556 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5"} Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.906687 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5"} Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.908475 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35" exitCode=0 Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.908559 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35"} Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.915161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"} Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.000695 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.000757 4984 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.923859 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa" exitCode=0 Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.923906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"} Jan 30 10:45:34 crc kubenswrapper[4984]: I0130 10:45:34.934607 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4" exitCode=0 Jan 30 10:45:34 crc kubenswrapper[4984]: I0130 10:45:34.934713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4"} Jan 30 10:45:36 crc kubenswrapper[4984]: I0130 10:45:36.976949 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"} Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.005906 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvmbl" podStartSLOduration=5.184509049 
podStartE2EDuration="10.005891115s" podCreationTimestamp="2026-01-30 10:45:28 +0000 UTC" firstStartedPulling="2026-01-30 10:45:29.854954144 +0000 UTC m=+2034.421257968" lastFinishedPulling="2026-01-30 10:45:34.6763362 +0000 UTC m=+2039.242640034" observedRunningTime="2026-01-30 10:45:38.00312578 +0000 UTC m=+2042.569429604" watchObservedRunningTime="2026-01-30 10:45:38.005891115 +0000 UTC m=+2042.572194939" Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.549903 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.549982 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:39 crc kubenswrapper[4984]: I0130 10:45:39.600419 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=< Jan 30 10:45:39 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:45:39 crc kubenswrapper[4984]: > Jan 30 10:45:40 crc kubenswrapper[4984]: I0130 10:45:40.008173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"} Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.031765 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerStarted","Data":"ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7"} Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.043673 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="144fba12-676d-457b-83f6-6195f089a240" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" exitCode=0 Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.043810 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"} Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.081107 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b29f" podStartSLOduration=5.016924191 podStartE2EDuration="13.081086834s" podCreationTimestamp="2026-01-30 10:45:29 +0000 UTC" firstStartedPulling="2026-01-30 10:45:32.911701637 +0000 UTC m=+2037.478005471" lastFinishedPulling="2026-01-30 10:45:40.97586428 +0000 UTC m=+2045.542168114" observedRunningTime="2026-01-30 10:45:42.062624726 +0000 UTC m=+2046.628928590" watchObservedRunningTime="2026-01-30 10:45:42.081086834 +0000 UTC m=+2046.647390668" Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.053693 4984 generic.go:334] "Generic (PLEG): container finished" podID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerID="e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a" exitCode=0 Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.053992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerDied","Data":"e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a"} Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.057071 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" 
event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"} Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.105628 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxbxn" podStartSLOduration=4.376833429 podStartE2EDuration="13.105612113s" podCreationTimestamp="2026-01-30 10:45:30 +0000 UTC" firstStartedPulling="2026-01-30 10:45:33.925293271 +0000 UTC m=+2038.491597105" lastFinishedPulling="2026-01-30 10:45:42.654071955 +0000 UTC m=+2047.220375789" observedRunningTime="2026-01-30 10:45:43.100754472 +0000 UTC m=+2047.667058296" watchObservedRunningTime="2026-01-30 10:45:43.105612113 +0000 UTC m=+2047.671915937" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.479990 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611267 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611396 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611430 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611489 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611525 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611558 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611639 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611745 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611777 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611805 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611834 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611864 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611887 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.618553 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.619749 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.621610 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628709 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk" (OuterVolumeSpecName: "kube-api-access-q9xfk") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "kube-api-access-q9xfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628866 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628843 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.629660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.630862 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.646845 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.655470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory" (OuterVolumeSpecName: "inventory") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713536 4984 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713574 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713588 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713597 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713609 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713618 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713627 4984 reconciler_common.go:293] "Volume 
detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713636 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713644 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713652 4984 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713660 4984 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713670 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713678 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:44 crc 
kubenswrapper[4984]: I0130 10:45:44.713713 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076343 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerDied","Data":"455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba"} Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076616 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076684 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.239765 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"] Jan 30 10:45:45 crc kubenswrapper[4984]: E0130 10:45:45.240194 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.240214 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.240402 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.241022 4984 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.246710 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.247341 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.247658 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.248059 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.248423 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.249790 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"] Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324600 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324787 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: 
\"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324933 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.325341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427776 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427890 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.428027 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.429305 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: 
\"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.436661 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.438906 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.440713 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.449292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.612203 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:46 crc kubenswrapper[4984]: I0130 10:45:46.358814 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"] Jan 30 10:45:47 crc kubenswrapper[4984]: I0130 10:45:47.098039 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerStarted","Data":"5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"} Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.124456 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerStarted","Data":"3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8"} Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.154436 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" podStartSLOduration=2.462144813 podStartE2EDuration="4.154418686s" podCreationTimestamp="2026-01-30 10:45:45 +0000 UTC" firstStartedPulling="2026-01-30 10:45:46.354840563 +0000 UTC m=+2050.921144377" lastFinishedPulling="2026-01-30 10:45:48.047114396 +0000 UTC m=+2052.613418250" observedRunningTime="2026-01-30 10:45:49.15085574 +0000 UTC m=+2053.717159584" watchObservedRunningTime="2026-01-30 10:45:49.154418686 +0000 UTC m=+2053.720722520" Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.634769 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=< Jan 30 10:45:49 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:45:49 crc kubenswrapper[4984]: > Jan 30 
10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.347360 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.347780 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.412512 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.952936 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.953383 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.037784 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.233418 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.276970 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:52 crc kubenswrapper[4984]: I0130 10:45:52.296418 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:53 crc kubenswrapper[4984]: I0130 10:45:53.698732 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:53 crc kubenswrapper[4984]: I0130 10:45:53.699392 4984 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-7b29f" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server" containerID="cri-o://ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" gracePeriod=2 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.184629 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" exitCode=0 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.184976 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7"} Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185039 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566"} Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185056 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185038 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxbxn" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server" containerID="cri-o://d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" gracePeriod=2 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.367424 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431309 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431464 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.432680 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities" (OuterVolumeSpecName: "utilities") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.439285 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx" (OuterVolumeSpecName: "kube-api-access-c7mmx") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "kube-api-access-c7mmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.513958 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533572 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533605 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533617 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.585842 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.736683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.736788 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737007 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737777 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities" (OuterVolumeSpecName: "utilities") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737889 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.740584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h" (OuterVolumeSpecName: "kube-api-access-8sn6h") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "kube-api-access-8sn6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.814727 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.839237 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.839288 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.193852 4984 generic.go:334] "Generic (PLEG): container finished" podID="144fba12-676d-457b-83f6-6195f089a240" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" exitCode=0 Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.193934 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194430 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194430 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"} Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194482 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5"} Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194501 4984 scope.go:117] "RemoveContainer" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.230682 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.231095 4984 scope.go:117] "RemoveContainer" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.254316 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.256503 4984 scope.go:117] "RemoveContainer" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.263915 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.273884 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 
10:45:55.304745 4984 scope.go:117] "RemoveContainer" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.305279 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": container with ID starting with d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428 not found: ID does not exist" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305317 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"} err="failed to get container status \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": rpc error: code = NotFound desc = could not find container \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": container with ID starting with d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428 not found: ID does not exist" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305337 4984 scope.go:117] "RemoveContainer" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.305926 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": container with ID starting with f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a not found: ID does not exist" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305947 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"} err="failed to get container status \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": rpc error: code = NotFound desc = could not find container \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": container with ID starting with f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a not found: ID does not exist" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305959 4984 scope.go:117] "RemoveContainer" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.306164 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": container with ID starting with aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5 not found: ID does not exist" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.306191 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5"} err="failed to get container status \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": rpc error: code = NotFound desc = could not find container \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": container with ID starting with aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5 not found: ID does not exist" Jan 30 10:45:56 crc kubenswrapper[4984]: I0130 10:45:56.102498 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144fba12-676d-457b-83f6-6195f089a240" path="/var/lib/kubelet/pods/144fba12-676d-457b-83f6-6195f089a240/volumes" Jan 30 10:45:56 crc kubenswrapper[4984]: I0130 
10:45:56.103881 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86db9413-efcb-4f87-8605-317f50fb468d" path="/var/lib/kubelet/pods/86db9413-efcb-4f87-8605-317f50fb468d/volumes"
Jan 30 10:45:59 crc kubenswrapper[4984]: I0130 10:45:59.606480 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=<
Jan 30 10:45:59 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s
Jan 30 10:45:59 crc kubenswrapper[4984]: >
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.001353 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.002154 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.002281 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.003942 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.004066 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" gracePeriod=600
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265284 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" exitCode=0
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"}
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265685 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:46:04 crc kubenswrapper[4984]: I0130 10:46:04.276812 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"}
Jan 30 10:46:09 crc kubenswrapper[4984]: I0130 10:46:09.591782 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=<
Jan 30 10:46:09 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s
Jan 30 10:46:09 crc kubenswrapper[4984]: >
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.624382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.673806 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.860531 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:20 crc kubenswrapper[4984]: I0130 10:46:20.433985 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" containerID="cri-o://0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9" gracePeriod=2
Jan 30 10:46:20 crc kubenswrapper[4984]: I0130 10:46:20.986823 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048205 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048335 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.049368 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities" (OuterVolumeSpecName: "utilities") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.059364 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56" (OuterVolumeSpecName: "kube-api-access-sbh56") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "kube-api-access-sbh56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.150579 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.150613 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.165653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.251611 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446414 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9" exitCode=0
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446796 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446848 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"}
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446891 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"24a1148d15bee715c556a07badff1472f1bb6f79211e4948aa32e4f198a42f23"}
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446919 4984 scope.go:117] "RemoveContainer" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.480944 4984 scope.go:117] "RemoveContainer" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.496206 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.502958 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.511507 4984 scope.go:117] "RemoveContainer" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.561863 4984 scope.go:117] "RemoveContainer" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.562464 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": container with ID starting with 0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9 not found: ID does not exist" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.562504 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"} err="failed to get container status \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": rpc error: code = NotFound desc = could not find container \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": container with ID starting with 0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9 not found: ID does not exist"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.562557 4984 scope.go:117] "RemoveContainer" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.563293 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": container with ID starting with 42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa not found: ID does not exist" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563353 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"} err="failed to get container status \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": rpc error: code = NotFound desc = could not find container \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": container with ID starting with 42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa not found: ID does not exist"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563392 4984 scope.go:117] "RemoveContainer" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.563830 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": container with ID starting with fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f not found: ID does not exist" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563864 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"} err="failed to get container status \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": rpc error: code = NotFound desc = could not find container \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": container with ID starting with fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f not found: ID does not exist"
Jan 30 10:46:22 crc kubenswrapper[4984]: I0130 10:46:22.102659 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" path="/var/lib/kubelet/pods/85f0471c-9b7e-4545-8550-08db9fa38fed/volumes"
Jan 30 10:46:47 crc kubenswrapper[4984]: I0130 10:46:47.735337 4984 generic.go:334] "Generic (PLEG): container finished" podID="2f986324-c570-4c65-aed1-952aa2538af8" containerID="3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8" exitCode=0
Jan 30 10:46:47 crc kubenswrapper[4984]: I0130 10:46:47.735442 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerDied","Data":"3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8"}
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.224372 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324756 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324871 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324893 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324951 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324975 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.330805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.330886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69" (OuterVolumeSpecName: "kube-api-access-rkm69") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "kube-api-access-rkm69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.348773 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.351040 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory" (OuterVolumeSpecName: "inventory") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.355824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426586 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426618 4984 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426629 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426639 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426648 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerDied","Data":"5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"}
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760682 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760732 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.858500 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859393 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859424 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859444 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859454 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859469 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859477 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859490 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859498 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859512 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859520 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859537 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859544 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859569 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859578 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859595 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859603 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859622 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859630 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859644 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859651 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859898 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859929 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859949 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859969 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.860941 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.863769 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.863927 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865130 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865396 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865778 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.870416 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.872357 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.934933 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935018 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935145 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935175 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935229 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037159 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037216 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037285 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.042284 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.043047 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.043260 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.044192 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.046183 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.061891 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.184783 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.706731 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.768045 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerStarted","Data":"26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b"}
Jan 30 10:46:51 crc kubenswrapper[4984]: I0130 10:46:51.781028 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerStarted","Data":"a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0"}
Jan 30 10:46:51 crc kubenswrapper[4984]: I0130 10:46:51.805202 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" podStartSLOduration=2.332331899 podStartE2EDuration="2.805181762s" podCreationTimestamp="2026-01-30 10:46:49 +0000 UTC" firstStartedPulling="2026-01-30 10:46:50.718623331 +0000 UTC m=+2115.284927155" lastFinishedPulling="2026-01-30 10:46:51.191473204 +0000 UTC m=+2115.757777018" observedRunningTime="2026-01-30 10:46:51.796759515 +0000 UTC m=+2116.363063339" watchObservedRunningTime="2026-01-30 10:46:51.805181762 +0000 UTC m=+2116.371485606"
Jan 30 10:47:37 crc kubenswrapper[4984]: I0130 10:47:37.523094 4984 generic.go:334] "Generic (PLEG): container finished" podID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerID="a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0" exitCode=0
Jan 30 10:47:37 crc kubenswrapper[4984]: I0130 10:47:37.523188 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerDied","Data":"a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0"}
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.021944 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166730 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166808 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166926 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.175668 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.181532 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.181712 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.182966 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.187692 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7" (OuterVolumeSpecName: "kube-api-access-wxdg7") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "kube-api-access-wxdg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.206842 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.210581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory" (OuterVolumeSpecName: "inventory") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.212414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.228425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.284748 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.284970 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285079 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285144 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285198 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") on node 
\"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.554990 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerDied","Data":"26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b"} Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.555049 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.558421 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.733958 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:39 crc kubenswrapper[4984]: E0130 10:47:39.734345 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.734362 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.734520 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.735097 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.737743 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.738019 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.738796 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.740538 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.752788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.752806 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804554 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804849 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.805033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.906983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907039 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907087 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907117 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.912990 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: 
\"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913635 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.931345 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:40 crc kubenswrapper[4984]: I0130 10:47:40.124699 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:40 crc kubenswrapper[4984]: I0130 10:47:40.675317 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.576583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerStarted","Data":"e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72"} Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.576921 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerStarted","Data":"19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0"} Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.604337 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" podStartSLOduration=2.053288674 podStartE2EDuration="2.60431489s" podCreationTimestamp="2026-01-30 10:47:39 +0000 UTC" firstStartedPulling="2026-01-30 10:47:40.681543017 +0000 UTC m=+2165.247846841" lastFinishedPulling="2026-01-30 10:47:41.232569223 +0000 UTC m=+2165.798873057" observedRunningTime="2026-01-30 10:47:41.599594152 +0000 UTC m=+2166.165898006" watchObservedRunningTime="2026-01-30 10:47:41.60431489 +0000 UTC m=+2166.170618724" Jan 30 10:48:03 crc kubenswrapper[4984]: I0130 10:48:03.001200 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:48:03 crc kubenswrapper[4984]: I0130 
10:48:03.001891 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.584629 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.588790 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.598422 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.703932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.704002 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.704165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod 
\"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805265 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.806108 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.806233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: 
\"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.833426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.911096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.366278 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:16 crc kubenswrapper[4984]: W0130 10:48:16.372484 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7365f2b3_2916_4c3b_8ce8_d34b7b45bcbb.slice/crio-463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc WatchSource:0}: Error finding container 463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc: Status 404 returned error can't find the container with id 463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.968856 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" exitCode=0 Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.968928 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820"} Jan 30 10:48:16 crc 
kubenswrapper[4984]: I0130 10:48:16.969224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerStarted","Data":"463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc"} Jan 30 10:48:17 crc kubenswrapper[4984]: I0130 10:48:17.983814 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" exitCode=0 Jan 30 10:48:17 crc kubenswrapper[4984]: I0130 10:48:17.983893 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc"} Jan 30 10:48:18 crc kubenswrapper[4984]: I0130 10:48:18.996376 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerStarted","Data":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} Jan 30 10:48:19 crc kubenswrapper[4984]: I0130 10:48:19.025699 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbxpr" podStartSLOduration=2.582496624 podStartE2EDuration="4.025669924s" podCreationTimestamp="2026-01-30 10:48:15 +0000 UTC" firstStartedPulling="2026-01-30 10:48:16.972644346 +0000 UTC m=+2201.538948200" lastFinishedPulling="2026-01-30 10:48:18.415817636 +0000 UTC m=+2202.982121500" observedRunningTime="2026-01-30 10:48:19.01925161 +0000 UTC m=+2203.585555444" watchObservedRunningTime="2026-01-30 10:48:19.025669924 +0000 UTC m=+2203.591973758" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.911370 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.913200 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.962523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:26 crc kubenswrapper[4984]: I0130 10:48:26.119687 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:26 crc kubenswrapper[4984]: I0130 10:48:26.194277 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.095827 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbxpr" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" containerID="cri-o://0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" gracePeriod=2 Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.736043 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.877941 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.878062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.878315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.883665 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities" (OuterVolumeSpecName: "utilities") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.885045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2" (OuterVolumeSpecName: "kube-api-access-h8rj2") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "kube-api-access-h8rj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.906733 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980671 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980706 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980719 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112634 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" exitCode=0 Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112700 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc"} Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112745 4984 scope.go:117] "RemoveContainer" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.113050 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.149778 4984 scope.go:117] "RemoveContainer" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.161616 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.170623 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.178698 4984 scope.go:117] "RemoveContainer" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.235864 4984 scope.go:117] "RemoveContainer" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 10:48:29.236475 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": container with ID starting with 0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56 not found: ID does not exist" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.236528 4984 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} err="failed to get container status \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": rpc error: code = NotFound desc = could not find container \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": container with ID starting with 0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56 not found: ID does not exist" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.236562 4984 scope.go:117] "RemoveContainer" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 10:48:29.236993 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": container with ID starting with d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc not found: ID does not exist" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237039 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc"} err="failed to get container status \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": rpc error: code = NotFound desc = could not find container \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": container with ID starting with d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc not found: ID does not exist" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237066 4984 scope.go:117] "RemoveContainer" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 
10:48:29.237433 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": container with ID starting with 8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820 not found: ID does not exist" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237474 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820"} err="failed to get container status \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": rpc error: code = NotFound desc = could not find container \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": container with ID starting with 8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820 not found: ID does not exist" Jan 30 10:48:30 crc kubenswrapper[4984]: I0130 10:48:30.111807 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" path="/var/lib/kubelet/pods/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb/volumes" Jan 30 10:48:33 crc kubenswrapper[4984]: I0130 10:48:33.000321 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:48:33 crc kubenswrapper[4984]: I0130 10:48:33.000650 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.001243 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.002039 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.002107 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.003044 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.003145 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" gracePeriod=600 Jan 30 10:49:04 crc kubenswrapper[4984]: E0130 10:49:04.251536 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483410 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" exitCode=0 Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483499 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"} Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483913 4984 scope.go:117] "RemoveContainer" containerID="d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.484612 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:04 crc kubenswrapper[4984]: E0130 10:49:04.485013 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:18 crc kubenswrapper[4984]: I0130 10:49:18.090528 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:18 crc kubenswrapper[4984]: E0130 10:49:18.091529 4984 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:30 crc kubenswrapper[4984]: I0130 10:49:30.090407 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:30 crc kubenswrapper[4984]: E0130 10:49:30.092550 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:42 crc kubenswrapper[4984]: I0130 10:49:42.090641 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:42 crc kubenswrapper[4984]: E0130 10:49:42.092560 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:53 crc kubenswrapper[4984]: I0130 10:49:53.090430 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:53 crc kubenswrapper[4984]: E0130 10:49:53.091502 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:04 crc kubenswrapper[4984]: I0130 10:50:04.090805 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:04 crc kubenswrapper[4984]: E0130 10:50:04.091594 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:18 crc kubenswrapper[4984]: I0130 10:50:18.090926 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:18 crc kubenswrapper[4984]: E0130 10:50:18.091888 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:33 crc kubenswrapper[4984]: I0130 10:50:33.090570 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:33 crc kubenswrapper[4984]: E0130 
10:50:33.091608 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:44 crc kubenswrapper[4984]: I0130 10:50:44.090389 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:44 crc kubenswrapper[4984]: E0130 10:50:44.091329 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:58 crc kubenswrapper[4984]: I0130 10:50:58.091392 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:58 crc kubenswrapper[4984]: E0130 10:50:58.092900 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:09 crc kubenswrapper[4984]: I0130 10:51:09.090861 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:09 crc 
kubenswrapper[4984]: E0130 10:51:09.092403 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:24 crc kubenswrapper[4984]: I0130 10:51:24.090446 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:24 crc kubenswrapper[4984]: E0130 10:51:24.093552 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:29 crc kubenswrapper[4984]: I0130 10:51:29.960391 4984 generic.go:334] "Generic (PLEG): container finished" podID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerID="e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72" exitCode=0 Jan 30 10:51:29 crc kubenswrapper[4984]: I0130 10:51:29.960494 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerDied","Data":"e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72"} Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.432690 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530278 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530352 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530538 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.979679 4984 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerDied","Data":"19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0"} Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.980138 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0" Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.980292 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.117631 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118183 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118204 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118212 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118219 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118232 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-content" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118238 4984 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-content" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118269 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-utilities" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118278 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-utilities" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118502 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118531 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.119295 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.119383 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122435 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122640 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.248775 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.249900 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm" (OuterVolumeSpecName: "kube-api-access-jcvvm") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "kube-api-access-jcvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254170 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254380 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254726 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254791 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254831 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: 
\"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255010 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255063 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" 
(UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255143 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255212 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255227 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: W0130 10:51:32.255337 4984 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d3ca7cba-514d-4761-821d-9b48578f0cc3/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255347 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.258425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory" (OuterVolumeSpecName: "inventory") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.258949 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.356916 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.356976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357053 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357081 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357121 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357177 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357232 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357314 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357327 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357335 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.358888 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.361281 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.361544 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362373 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362784 4984 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.363444 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.364939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.389057 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.435669 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:33 crc kubenswrapper[4984]: I0130 10:51:33.007425 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:33 crc kubenswrapper[4984]: I0130 10:51:33.017706 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.001844 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerStarted","Data":"1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57"} Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.002478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerStarted","Data":"8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9"} Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.049553 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" podStartSLOduration=1.6454343649999998 podStartE2EDuration="2.049526306s" podCreationTimestamp="2026-01-30 10:51:32 +0000 UTC" firstStartedPulling="2026-01-30 10:51:33.017384394 +0000 UTC m=+2397.583688238" lastFinishedPulling="2026-01-30 10:51:33.421476355 +0000 UTC m=+2397.987780179" observedRunningTime="2026-01-30 10:51:34.030938313 +0000 UTC m=+2398.597242137" watchObservedRunningTime="2026-01-30 10:51:34.049526306 +0000 UTC m=+2398.615830160" Jan 30 10:51:38 crc kubenswrapper[4984]: I0130 10:51:38.091645 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:38 crc kubenswrapper[4984]: E0130 10:51:38.092557 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:49 crc kubenswrapper[4984]: I0130 10:51:49.091708 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:49 crc kubenswrapper[4984]: E0130 10:51:49.092885 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:03 crc kubenswrapper[4984]: I0130 10:52:03.090022 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:03 crc kubenswrapper[4984]: E0130 10:52:03.091168 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:17 crc kubenswrapper[4984]: I0130 10:52:17.091320 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:17 crc kubenswrapper[4984]: E0130 
10:52:17.092235 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.810145 4984 scope.go:117] "RemoveContainer" containerID="de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.839918 4984 scope.go:117] "RemoveContainer" containerID="54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.882207 4984 scope.go:117] "RemoveContainer" containerID="ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" Jan 30 10:52:31 crc kubenswrapper[4984]: I0130 10:52:31.090397 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:31 crc kubenswrapper[4984]: E0130 10:52:31.091061 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:46 crc kubenswrapper[4984]: I0130 10:52:46.096429 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:46 crc kubenswrapper[4984]: E0130 10:52:46.097418 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:57 crc kubenswrapper[4984]: I0130 10:52:57.090579 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:57 crc kubenswrapper[4984]: E0130 10:52:57.091824 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:09 crc kubenswrapper[4984]: I0130 10:53:09.090214 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:09 crc kubenswrapper[4984]: E0130 10:53:09.091046 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:20 crc kubenswrapper[4984]: I0130 10:53:20.090315 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:20 crc kubenswrapper[4984]: E0130 10:53:20.091296 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:33 crc kubenswrapper[4984]: I0130 10:53:33.090699 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:33 crc kubenswrapper[4984]: E0130 10:53:33.091889 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:42 crc kubenswrapper[4984]: I0130 10:53:42.236407 4984 generic.go:334] "Generic (PLEG): container finished" podID="eaa18315-192f-412f-b94c-708c98209a5a" containerID="1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57" exitCode=0 Jan 30 10:53:42 crc kubenswrapper[4984]: I0130 10:53:42.236513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerDied","Data":"1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57"} Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.748744 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883260 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883375 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883421 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883471 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883507 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883536 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.889885 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.892141 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q" (OuterVolumeSpecName: "kube-api-access-5tj4q") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "kube-api-access-5tj4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.916128 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.917452 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919436 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.922886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.923477 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory" (OuterVolumeSpecName: "inventory") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986183 4984 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986220 4984 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986286 4984 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986299 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986309 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986317 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986324 4984 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") on node 
\"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986332 4984 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986360 4984 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerDied","Data":"8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9"} Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261116 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261121 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358053 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:44 crc kubenswrapper[4984]: E0130 10:53:44.358515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358537 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358767 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.359508 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367723 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367851 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367963 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.369465 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.378350 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.504936 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505373 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505400 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505421 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: 
\"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505441 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.606911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.606972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607208 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607266 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607304 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.612728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.612848 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.613532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.614992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.615766 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: 
\"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.617224 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.624878 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.681059 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:45 crc kubenswrapper[4984]: I0130 10:53:45.261646 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.281404 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerStarted","Data":"1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b"} Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.281667 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerStarted","Data":"49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f"} Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.299178 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" podStartSLOduration=1.882130853 podStartE2EDuration="2.299160788s" podCreationTimestamp="2026-01-30 10:53:44 +0000 UTC" firstStartedPulling="2026-01-30 10:53:45.265853562 +0000 UTC m=+2529.832157416" lastFinishedPulling="2026-01-30 10:53:45.682883507 +0000 UTC m=+2530.249187351" observedRunningTime="2026-01-30 10:53:46.296973689 +0000 UTC m=+2530.863277523" watchObservedRunningTime="2026-01-30 10:53:46.299160788 +0000 UTC m=+2530.865464612" Jan 30 10:53:48 crc kubenswrapper[4984]: I0130 10:53:48.092218 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:48 crc kubenswrapper[4984]: E0130 10:53:48.093400 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:54:02 crc kubenswrapper[4984]: I0130 10:54:02.090308 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:54:02 crc kubenswrapper[4984]: E0130 10:54:02.091226 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:54:13 crc kubenswrapper[4984]: I0130 10:54:13.091001 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:54:13 crc kubenswrapper[4984]: I0130 10:54:13.521661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.303519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.307759 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.327725 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404158 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.506814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.506910 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507416 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507638 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.528388 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.675514 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:31 crc kubenswrapper[4984]: I0130 10:55:31.131150 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:31 crc kubenswrapper[4984]: I0130 10:55:31.207643 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"c2fca6339fb0afef8e04fdf5437a00151a74afbf727cdbcb851c0730f8e653c5"} Jan 30 10:55:32 crc kubenswrapper[4984]: I0130 10:55:32.221301 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d" exitCode=0 Jan 30 10:55:32 crc kubenswrapper[4984]: I0130 10:55:32.221448 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d"} Jan 30 10:55:35 crc kubenswrapper[4984]: I0130 10:55:35.253644 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add"} Jan 30 10:55:36 crc kubenswrapper[4984]: I0130 10:55:36.268408 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add" exitCode=0 Jan 30 10:55:36 crc kubenswrapper[4984]: I0130 10:55:36.268479 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" 
event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add"} Jan 30 10:55:38 crc kubenswrapper[4984]: I0130 10:55:38.291899 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b"} Jan 30 10:55:38 crc kubenswrapper[4984]: I0130 10:55:38.341303 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpdjr" podStartSLOduration=3.394633307 podStartE2EDuration="8.341218458s" podCreationTimestamp="2026-01-30 10:55:30 +0000 UTC" firstStartedPulling="2026-01-30 10:55:32.223163923 +0000 UTC m=+2636.789467757" lastFinishedPulling="2026-01-30 10:55:37.169749044 +0000 UTC m=+2641.736052908" observedRunningTime="2026-01-30 10:55:38.335347939 +0000 UTC m=+2642.901651813" watchObservedRunningTime="2026-01-30 10:55:38.341218458 +0000 UTC m=+2642.907522312" Jan 30 10:55:40 crc kubenswrapper[4984]: I0130 10:55:40.676393 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:40 crc kubenswrapper[4984]: I0130 10:55:40.676991 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:41 crc kubenswrapper[4984]: I0130 10:55:41.725592 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpdjr" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:55:41 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:55:41 crc kubenswrapper[4984]: > Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.734626 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.802555 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.984785 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:52 crc kubenswrapper[4984]: I0130 10:55:52.422171 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpdjr" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" containerID="cri-o://efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" gracePeriod=2 Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.445110 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" exitCode=0 Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.445182 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b"} Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.809470 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904080 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904357 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.905484 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities" (OuterVolumeSpecName: "utilities") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.906134 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.910287 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f" (OuterVolumeSpecName: "kube-api-access-lrb4f") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "kube-api-access-lrb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.008022 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.035012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.110402 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"c2fca6339fb0afef8e04fdf5437a00151a74afbf727cdbcb851c0730f8e653c5"} Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460542 4984 scope.go:117] "RemoveContainer" containerID="efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460593 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.490428 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.499300 4984 scope.go:117] "RemoveContainer" containerID="587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.500458 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.522610 4984 scope.go:117] "RemoveContainer" containerID="9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d" Jan 30 10:55:56 crc kubenswrapper[4984]: I0130 10:55:56.104786 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" path="/var/lib/kubelet/pods/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a/volumes" Jan 30 10:56:03 crc 
kubenswrapper[4984]: I0130 10:56:03.541709 4984 generic.go:334] "Generic (PLEG): container finished" podID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerID="1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b" exitCode=0 Jan 30 10:56:03 crc kubenswrapper[4984]: I0130 10:56:03.541770 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerDied","Data":"1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b"} Jan 30 10:56:04 crc kubenswrapper[4984]: I0130 10:56:04.966541 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033599 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.034827 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035235 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035422 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.040461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.050766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g" (OuterVolumeSpecName: "kube-api-access-wlf9g") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "kube-api-access-wlf9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.065233 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.067908 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.068345 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory" (OuterVolumeSpecName: "inventory") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.070183 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.077772 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.137958 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138009 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138021 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138031 4984 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138039 4984 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138072 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138081 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerDied","Data":"49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f"} Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563265 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563340 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:56:33 crc kubenswrapper[4984]: I0130 10:56:33.001353 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:56:33 crc kubenswrapper[4984]: I0130 10:56:33.001924 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.708372 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709307 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-utilities" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709327 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-utilities" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709344 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-content" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709352 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-content" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709440 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709453 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709481 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709489 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709727 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709759 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.710645 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.713382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68tn4" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.713881 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.714576 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.718700 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.721066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797764 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797984 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899278 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899330 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899352 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899554 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899581 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899606 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899622 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.900837 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.902504 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.905896 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001328 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001394 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001415 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001486 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001515 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001563 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.002285 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") 
device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.002790 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.005674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.011988 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.034745 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.042502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.082168 4984 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.514836 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.529471 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.897729 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.901594 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.914203 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.026621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.026704 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.027614 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.126239 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerStarted","Data":"b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a"} Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129521 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129619 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.130029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: 
\"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.130104 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.154083 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.297889 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.839181 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:00 crc kubenswrapper[4984]: W0130 10:57:00.851227 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3449a677_2462_4a6a_9855_f07157020548.slice/crio-7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb WatchSource:0}: Error finding container 7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb: Status 404 returned error can't find the container with id 7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.139909 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898" exitCode=0 Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.140195 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898"} Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.140225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerStarted","Data":"7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb"} Jan 30 10:57:03 crc kubenswrapper[4984]: I0130 10:57:03.000320 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:57:03 crc kubenswrapper[4984]: I0130 10:57:03.000969 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:57:06 crc kubenswrapper[4984]: I0130 10:57:06.194098 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391" exitCode=0 Jan 30 10:57:06 crc kubenswrapper[4984]: I0130 10:57:06.194173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391"} Jan 30 10:57:08 crc kubenswrapper[4984]: I0130 10:57:08.237640 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerStarted","Data":"d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a"} Jan 30 10:57:08 crc kubenswrapper[4984]: I0130 10:57:08.260229 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmx8m" podStartSLOduration=3.470057882 podStartE2EDuration="9.260192657s" podCreationTimestamp="2026-01-30 10:56:59 +0000 UTC" firstStartedPulling="2026-01-30 10:57:01.141609723 +0000 UTC m=+2725.707913547" lastFinishedPulling="2026-01-30 10:57:06.931744498 +0000 UTC m=+2731.498048322" observedRunningTime="2026-01-30 10:57:08.254443382 +0000 UTC m=+2732.820747246" watchObservedRunningTime="2026-01-30 10:57:08.260192657 +0000 UTC m=+2732.826496481" Jan 30 
10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.298913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.299020 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.360841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:20 crc kubenswrapper[4984]: I0130 10:57:20.347706 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:20 crc kubenswrapper[4984]: I0130 10:57:20.401849 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:21 crc kubenswrapper[4984]: I0130 10:57:21.356382 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmx8m" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" containerID="cri-o://d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" gracePeriod=2 Jan 30 10:57:22 crc kubenswrapper[4984]: I0130 10:57:22.367379 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" exitCode=0 Jan 30 10:57:22 crc kubenswrapper[4984]: I0130 10:57:22.367466 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a"} Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.299464 4984 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.300350 4984 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.300991 4984 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.301070 4984 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-wmx8m" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.000513 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.001154 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.001208 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.002529 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.002713 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" gracePeriod=600 Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481836 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" exitCode=0 Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481903 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481976 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.579918 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.582467 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6mxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2281d2df-38c2-4c96-bff0-09cf745f1e50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.586219 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" Jan 30 10:57:39 crc kubenswrapper[4984]: I0130 10:57:39.998187 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180461 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.182678 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities" (OuterVolumeSpecName: "utilities") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.192717 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd" (OuterVolumeSpecName: "kube-api-access-jjtvd") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "kube-api-access-jjtvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.229702 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283518 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283553 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283563 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.561824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565132 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565310 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb"} Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565339 4984 scope.go:117] "RemoveContainer" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" Jan 30 10:57:40 crc kubenswrapper[4984]: E0130 10:57:40.566533 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.601030 4984 scope.go:117] "RemoveContainer" containerID="b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.638299 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.648101 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.656390 4984 scope.go:117] "RemoveContainer" containerID="3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898" Jan 30 10:57:42 crc 
kubenswrapper[4984]: I0130 10:57:42.100671 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3449a677-2462-4a6a-9855-f07157020548" path="/var/lib/kubelet/pods/3449a677-2462-4a6a-9855-f07157020548/volumes" Jan 30 10:57:56 crc kubenswrapper[4984]: I0130 10:57:56.607129 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 10:57:57 crc kubenswrapper[4984]: I0130 10:57:57.750851 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerStarted","Data":"37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee"} Jan 30 10:57:57 crc kubenswrapper[4984]: I0130 10:57:57.772084 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.6964289900000002 podStartE2EDuration="1m0.772062293s" podCreationTimestamp="2026-01-30 10:56:57 +0000 UTC" firstStartedPulling="2026-01-30 10:56:59.529090647 +0000 UTC m=+2724.095394491" lastFinishedPulling="2026-01-30 10:57:56.60472396 +0000 UTC m=+2781.171027794" observedRunningTime="2026-01-30 10:57:57.767721696 +0000 UTC m=+2782.334025530" watchObservedRunningTime="2026-01-30 10:57:57.772062293 +0000 UTC m=+2782.338366117" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.363725 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364828 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364846 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364862 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-utilities" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364869 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-utilities" Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364888 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-content" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364895 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-content" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.365136 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.366793 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.376586 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.537899 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.538227 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.538535 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641210 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641475 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641858 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.665499 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.686360 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.160201 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.675680 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" exitCode=0 Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.676029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3"} Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.676066 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"ea5d3b5f5673e849014555fd30ecbf70530ca246faf0657e13ceeeca759576d8"} Jan 30 10:59:35 crc kubenswrapper[4984]: I0130 10:59:35.723322 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.734624 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" exitCode=0 Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.734814 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" 
event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.735089 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.765875 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brx9l" podStartSLOduration=2.312826724 podStartE2EDuration="3.765855576s" podCreationTimestamp="2026-01-30 10:59:33 +0000 UTC" firstStartedPulling="2026-01-30 10:59:34.678579388 +0000 UTC m=+2879.244883212" lastFinishedPulling="2026-01-30 10:59:36.13160824 +0000 UTC m=+2880.697912064" observedRunningTime="2026-01-30 10:59:36.760943333 +0000 UTC m=+2881.327247157" watchObservedRunningTime="2026-01-30 10:59:36.765855576 +0000 UTC m=+2881.332159400" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.816224 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.818857 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.830461 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.832794 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.832974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.833112 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935151 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935329 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935379 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.966404 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.151415 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.686739 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.687156 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.726085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.745589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.801619 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerStarted","Data":"2e2e8333c78d01f65b23bf3fae9e7a0414dd58408924cf41e559b3886dd8d439"} Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.864675 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:44 crc kubenswrapper[4984]: I0130 10:59:44.817217 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" exitCode=0 Jan 30 10:59:44 crc kubenswrapper[4984]: I0130 10:59:44.818402 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.141722 4984 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.142331 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brx9l" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" containerID="cri-o://bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" gracePeriod=2 Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.628849 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813784 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813986 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.814894 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities" (OuterVolumeSpecName: "utilities") pod 
"99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.826390 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7" (OuterVolumeSpecName: "kube-api-access-hfdt7") pod "99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "kube-api-access-hfdt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.833461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837509 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" exitCode=0 Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837564 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837596 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"ea5d3b5f5673e849014555fd30ecbf70530ca246faf0657e13ceeeca759576d8"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837619 4984 scope.go:117] "RemoveContainer" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837790 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.916537 4984 scope.go:117] "RemoveContainer" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918239 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918308 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918322 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.947732 4984 scope.go:117] "RemoveContainer" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.958424 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.969013 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.986139 4984 scope.go:117] "RemoveContainer" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.987675 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": container with ID starting with bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db not found: ID does not exist" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.987723 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} err="failed to get container status \"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": rpc error: code = NotFound desc = could not find container \"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": container with ID starting with bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db not found: ID does not exist" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.987749 4984 scope.go:117] "RemoveContainer" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.991813 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": container with ID starting with 5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636 not found: ID does not exist" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.991869 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} err="failed to get container status \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": rpc error: code = NotFound desc = could not find container \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": container with ID 
starting with 5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636 not found: ID does not exist" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.991904 4984 scope.go:117] "RemoveContainer" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.992508 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": container with ID starting with 9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3 not found: ID does not exist" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.992539 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3"} err="failed to get container status \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": rpc error: code = NotFound desc = could not find container \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": container with ID starting with 9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3 not found: ID does not exist" Jan 30 10:59:47 crc kubenswrapper[4984]: I0130 10:59:47.851044 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" exitCode=0 Jan 30 10:59:47 crc kubenswrapper[4984]: I0130 10:59:47.851099 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e"} Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.104717 4984 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" path="/var/lib/kubelet/pods/99324cad-7c49-4be9-ad61-e9df70a2a954/volumes" Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.864137 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerStarted","Data":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.888525 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vzsl6" podStartSLOduration=4.237286675 podStartE2EDuration="6.888503393s" podCreationTimestamp="2026-01-30 10:59:42 +0000 UTC" firstStartedPulling="2026-01-30 10:59:45.826383206 +0000 UTC m=+2890.392687030" lastFinishedPulling="2026-01-30 10:59:48.477599914 +0000 UTC m=+2893.043903748" observedRunningTime="2026-01-30 10:59:48.880089506 +0000 UTC m=+2893.446393350" watchObservedRunningTime="2026-01-30 10:59:48.888503393 +0000 UTC m=+2893.454807217" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.151730 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.152383 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.209997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.946379 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.993859 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:55 crc kubenswrapper[4984]: I0130 10:59:55.926885 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vzsl6" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" containerID="cri-o://c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" gracePeriod=2 Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.390463 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513700 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513820 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513915 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.514729 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities" (OuterVolumeSpecName: "utilities") pod 
"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.519490 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s" (OuterVolumeSpecName: "kube-api-access-s5d9s") pod "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "kube-api-access-s5d9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.573679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616464 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616505 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616516 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940822 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" exitCode=0 Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940896 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940922 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"2e2e8333c78d01f65b23bf3fae9e7a0414dd58408924cf41e559b3886dd8d439"} Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940939 4984 scope.go:117] "RemoveContainer" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 
10:59:56.941091 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.986522 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.003863 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.004924 4984 scope.go:117] "RemoveContainer" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.027013 4984 scope.go:117] "RemoveContainer" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.067761 4984 scope.go:117] "RemoveContainer" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.068203 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": container with ID starting with c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5 not found: ID does not exist" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068232 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} err="failed to get container status \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": rpc error: code = NotFound desc = could not find container \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": container with ID starting with 
c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5 not found: ID does not exist" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068274 4984 scope.go:117] "RemoveContainer" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.068489 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": container with ID starting with 2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e not found: ID does not exist" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068596 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e"} err="failed to get container status \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": rpc error: code = NotFound desc = could not find container \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": container with ID starting with 2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e not found: ID does not exist" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068669 4984 scope.go:117] "RemoveContainer" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.069005 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": container with ID starting with 04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48 not found: ID does not exist" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc 
kubenswrapper[4984]: I0130 10:59:57.069022 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48"} err="failed to get container status \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": rpc error: code = NotFound desc = could not find container \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": container with ID starting with 04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48 not found: ID does not exist" Jan 30 10:59:58 crc kubenswrapper[4984]: I0130 10:59:58.102344 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" path="/var/lib/kubelet/pods/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456/volumes" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.147385 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148185 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148203 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148228 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148237 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148263 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" 
containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148271 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148281 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148288 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148305 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148312 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148331 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148338 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148556 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148592 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.149323 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.152721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.152816 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.161191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290603 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392131 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392228 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.393405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.400144 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.409426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.471478 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.922574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.979294 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerStarted","Data":"5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8"} Jan 30 11:00:01 crc kubenswrapper[4984]: I0130 11:00:01.989032 4984 generic.go:334] "Generic (PLEG): container finished" podID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerID="401d0865ac4589550574009d624842e59780a90d3f55b79cf9e51e5b49483b0a" exitCode=0 Jan 30 11:00:01 crc kubenswrapper[4984]: I0130 11:00:01.989117 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" 
event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerDied","Data":"401d0865ac4589550574009d624842e59780a90d3f55b79cf9e51e5b49483b0a"} Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.000975 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.001205 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.322129 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480618 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480735 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480824 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.482472 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.488597 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9" (OuterVolumeSpecName: "kube-api-access-xsjq9") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). 
InnerVolumeSpecName "kube-api-access-xsjq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.492376 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583486 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583515 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583526 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerDied","Data":"5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8"} Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006480 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006537 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.397738 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.408377 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 11:00:06 crc kubenswrapper[4984]: I0130 11:00:06.104237 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" path="/var/lib/kubelet/pods/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9/volumes" Jan 30 11:00:24 crc kubenswrapper[4984]: I0130 11:00:24.113322 4984 scope.go:117] "RemoveContainer" containerID="626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b" Jan 30 11:00:33 crc kubenswrapper[4984]: I0130 11:00:33.000212 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:00:33 crc kubenswrapper[4984]: I0130 11:00:33.000676 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.160691 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:00 crc kubenswrapper[4984]: E0130 11:01:00.161788 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.161809 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.162101 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.162937 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.200520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236135 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " 
pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236333 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338400 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338711 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.339156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " 
pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.348090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.352283 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.352799 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.357047 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.491728 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:00.999941 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.521001 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerStarted","Data":"25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714"} Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.521332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerStarted","Data":"43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05"} Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.538437 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496181-vtxhk" podStartSLOduration=1.538418481 podStartE2EDuration="1.538418481s" podCreationTimestamp="2026-01-30 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 11:01:01.534812223 +0000 UTC m=+2966.101116047" watchObservedRunningTime="2026-01-30 11:01:01.538418481 +0000 UTC m=+2966.104722315" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.000756 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.001171 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.001278 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.002175 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.002281 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" gracePeriod=600 Jan 30 11:01:03 crc kubenswrapper[4984]: E0130 11:01:03.130861 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545127 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" exitCode=0 Jan 30 
11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545862 4984 scope.go:117] "RemoveContainer" containerID="5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.546533 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:03 crc kubenswrapper[4984]: E0130 11:01:03.546842 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.549200 4984 generic.go:334] "Generic (PLEG): container finished" podID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerID="25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714" exitCode=0 Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.549264 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerDied","Data":"25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714"} Jan 30 11:01:04 crc kubenswrapper[4984]: I0130 11:01:04.947586 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041198 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041535 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041722 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.052089 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.052151 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp" (OuterVolumeSpecName: "kube-api-access-nhwjp") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "kube-api-access-nhwjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.076328 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.105071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data" (OuterVolumeSpecName: "config-data") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144286 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144321 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144335 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144345 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576597 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerDied","Data":"43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05"} Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576639 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576653 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05" Jan 30 11:01:17 crc kubenswrapper[4984]: I0130 11:01:17.089997 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:17 crc kubenswrapper[4984]: E0130 11:01:17.090791 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:29 crc kubenswrapper[4984]: I0130 11:01:29.090205 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:29 crc kubenswrapper[4984]: E0130 11:01:29.091209 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:44 crc kubenswrapper[4984]: I0130 11:01:44.091764 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:44 crc kubenswrapper[4984]: E0130 11:01:44.092519 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:58 crc kubenswrapper[4984]: I0130 11:01:58.091819 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:58 crc kubenswrapper[4984]: E0130 11:01:58.092625 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:10 crc kubenswrapper[4984]: I0130 11:02:10.091269 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:10 crc kubenswrapper[4984]: E0130 11:02:10.092006 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:22 crc kubenswrapper[4984]: I0130 11:02:22.090947 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:22 crc kubenswrapper[4984]: E0130 11:02:22.091740 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:36 crc kubenswrapper[4984]: I0130 11:02:36.098848 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:36 crc kubenswrapper[4984]: E0130 11:02:36.099853 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:48 crc kubenswrapper[4984]: I0130 11:02:48.094784 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:48 crc kubenswrapper[4984]: E0130 11:02:48.095409 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:59 crc kubenswrapper[4984]: I0130 11:02:59.090519 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:59 crc kubenswrapper[4984]: E0130 11:02:59.091206 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:10 crc kubenswrapper[4984]: I0130 11:03:10.090748 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:10 crc kubenswrapper[4984]: E0130 11:03:10.091580 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:24 crc kubenswrapper[4984]: I0130 11:03:24.091474 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:24 crc kubenswrapper[4984]: E0130 11:03:24.092398 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:37 crc kubenswrapper[4984]: I0130 11:03:37.091131 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:37 crc kubenswrapper[4984]: E0130 11:03:37.091946 4984 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:52 crc kubenswrapper[4984]: I0130 11:03:52.090916 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:52 crc kubenswrapper[4984]: E0130 11:03:52.091494 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:06 crc kubenswrapper[4984]: I0130 11:04:06.096321 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:06 crc kubenswrapper[4984]: E0130 11:04:06.097116 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:17 crc kubenswrapper[4984]: I0130 11:04:17.090688 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:17 crc kubenswrapper[4984]: E0130 11:04:17.091488 4984 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:32 crc kubenswrapper[4984]: I0130 11:04:32.090142 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:32 crc kubenswrapper[4984]: E0130 11:04:32.091774 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:46 crc kubenswrapper[4984]: I0130 11:04:46.099058 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:46 crc kubenswrapper[4984]: E0130 11:04:46.100677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:59 crc kubenswrapper[4984]: I0130 11:04:59.090450 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:59 crc kubenswrapper[4984]: E0130 11:04:59.091059 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:11 crc kubenswrapper[4984]: I0130 11:05:11.090972 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:11 crc kubenswrapper[4984]: E0130 11:05:11.092039 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:22 crc kubenswrapper[4984]: I0130 11:05:22.091936 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:22 crc kubenswrapper[4984]: E0130 11:05:22.092968 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:35 crc kubenswrapper[4984]: I0130 11:05:35.090322 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:35 crc kubenswrapper[4984]: E0130 
11:05:35.092258 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:48 crc kubenswrapper[4984]: I0130 11:05:48.093704 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:48 crc kubenswrapper[4984]: E0130 11:05:48.094729 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:06:00 crc kubenswrapper[4984]: I0130 11:06:00.090981 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:06:00 crc kubenswrapper[4984]: E0130 11:06:00.091807 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:06:11 crc kubenswrapper[4984]: I0130 11:06:11.091100 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:06:11 crc 
kubenswrapper[4984]: I0130 11:06:11.633758 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"} Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.649552 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:06:59 crc kubenswrapper[4984]: E0130 11:06:59.653188 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.653267 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.653583 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.655543 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.662122 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736412 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736597 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838660 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838770 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.839191 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.839467 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.863268 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.990380 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:00 crc kubenswrapper[4984]: I0130 11:07:00.457896 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.154767 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" exitCode=0 Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.154829 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5"} Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.155216 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"f64c210d3c51f28fad42d4986b14441accf489347c81125876407ded779a91f9"} Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.157173 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 11:07:02 crc kubenswrapper[4984]: I0130 11:07:02.172973 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} Jan 30 11:07:04 crc kubenswrapper[4984]: I0130 11:07:04.203188 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" exitCode=0 Jan 30 11:07:04 crc kubenswrapper[4984]: I0130 11:07:04.204431 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} Jan 30 11:07:06 crc kubenswrapper[4984]: I0130 11:07:06.230261 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} Jan 30 11:07:06 crc kubenswrapper[4984]: I0130 11:07:06.256729 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ngm6" podStartSLOduration=3.340874978 podStartE2EDuration="7.256702308s" podCreationTimestamp="2026-01-30 11:06:59 +0000 UTC" firstStartedPulling="2026-01-30 11:07:01.156961241 +0000 UTC m=+3325.723265065" lastFinishedPulling="2026-01-30 11:07:05.072788571 +0000 UTC m=+3329.639092395" observedRunningTime="2026-01-30 11:07:06.248183358 +0000 UTC m=+3330.814487182" watchObservedRunningTime="2026-01-30 11:07:06.256702308 +0000 UTC m=+3330.823006142" Jan 30 11:07:09 crc kubenswrapper[4984]: I0130 11:07:09.991004 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:09 crc kubenswrapper[4984]: I0130 11:07:09.992800 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:11 crc kubenswrapper[4984]: I0130 11:07:11.076245 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2ngm6" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" probeResult="failure" output=< Jan 30 11:07:11 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 11:07:11 crc kubenswrapper[4984]: > Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 
11:07:20.074638 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 11:07:20.154644 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 11:07:20.315052 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:21 crc kubenswrapper[4984]: I0130 11:07:21.398243 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ngm6" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" containerID="cri-o://b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" gracePeriod=2 Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.083945 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226003 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226651 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226713 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.227574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities" (OuterVolumeSpecName: "utilities") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.231374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn" (OuterVolumeSpecName: "kube-api-access-g9znn") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "kube-api-access-g9znn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.329592 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.329622 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.372875 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408353 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" exitCode=0 Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"f64c210d3c51f28fad42d4986b14441accf489347c81125876407ded779a91f9"} Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408431 4984 scope.go:117] "RemoveContainer" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408533 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.431710 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.441023 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.446411 4984 scope.go:117] "RemoveContainer" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.453076 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.474748 4984 scope.go:117] "RemoveContainer" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.522987 4984 scope.go:117] "RemoveContainer" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.523502 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": container with ID starting with b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307 not found: ID does not exist" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.523549 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} err="failed to get container status 
\"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": rpc error: code = NotFound desc = could not find container \"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": container with ID starting with b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307 not found: ID does not exist" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.523577 4984 scope.go:117] "RemoveContainer" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.524038 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": container with ID starting with e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8 not found: ID does not exist" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524073 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} err="failed to get container status \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": rpc error: code = NotFound desc = could not find container \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": container with ID starting with e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8 not found: ID does not exist" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524095 4984 scope.go:117] "RemoveContainer" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.524539 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": container with ID starting with 0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5 not found: ID does not exist" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524568 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5"} err="failed to get container status \"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": rpc error: code = NotFound desc = could not find container \"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": container with ID starting with 0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5 not found: ID does not exist" Jan 30 11:07:24 crc kubenswrapper[4984]: I0130 11:07:24.106963 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" path="/var/lib/kubelet/pods/b3702dd8-6210-4f96-a5de-eeabe7c42deb/volumes" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.249860 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250822 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250838 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250867 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-content" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250875 4984 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-content" Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250887 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-utilities" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250894 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-utilities" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.251061 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.252519 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.260452 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361609 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361683 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.462983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463183 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463824 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.487126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.582603 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.025192 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514665 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab" exitCode=0 Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514720 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"} Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514763 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"8784493f0ec0a5cbfd98cba95b8ac016872ae00f15be3a0f8f39b96f70990a0b"} Jan 30 11:07:30 crc kubenswrapper[4984]: I0130 
11:07:30.529927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} Jan 30 11:07:31 crc kubenswrapper[4984]: I0130 11:07:31.541970 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf" exitCode=0 Jan 30 11:07:31 crc kubenswrapper[4984]: I0130 11:07:31.542072 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} Jan 30 11:07:32 crc kubenswrapper[4984]: I0130 11:07:32.554316 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"} Jan 30 11:07:32 crc kubenswrapper[4984]: I0130 11:07:32.584936 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m822f" podStartSLOduration=2.151622542 podStartE2EDuration="4.584907853s" podCreationTimestamp="2026-01-30 11:07:28 +0000 UTC" firstStartedPulling="2026-01-30 11:07:29.524933246 +0000 UTC m=+3354.091237070" lastFinishedPulling="2026-01-30 11:07:31.958218547 +0000 UTC m=+3356.524522381" observedRunningTime="2026-01-30 11:07:32.580969737 +0000 UTC m=+3357.147273561" watchObservedRunningTime="2026-01-30 11:07:32.584907853 +0000 UTC m=+3357.151211697" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.584437 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m822f" 
Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.585291 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.660527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.748822 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.907391 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:40 crc kubenswrapper[4984]: I0130 11:07:40.627814 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m822f" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server" containerID="cri-o://499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" gracePeriod=2 Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.410023 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572403 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572547 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572774 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.574438 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities" (OuterVolumeSpecName: "utilities") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.578628 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9" (OuterVolumeSpecName: "kube-api-access-9kdf9") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "kube-api-access-9kdf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641520 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641574 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"} Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.642339 4984 scope.go:117] "RemoveContainer" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641522 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" exitCode=0 Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.642470 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"8784493f0ec0a5cbfd98cba95b8ac016872ae00f15be3a0f8f39b96f70990a0b"} Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.675610 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.675666 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.676420 4984 scope.go:117] "RemoveContainer" 
containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.714941 4984 scope.go:117] "RemoveContainer" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.718558 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.746529 4984 scope.go:117] "RemoveContainer" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.747039 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": container with ID starting with 499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6 not found: ID does not exist" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747115 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"} err="failed to get container status \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": rpc error: code = NotFound desc = could not find container \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": container with ID starting with 499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6 not found: ID does not exist" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 
11:07:41.747164 4984 scope.go:117] "RemoveContainer" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf" Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.747576 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": container with ID starting with 09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf not found: ID does not exist" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747618 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} err="failed to get container status \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": rpc error: code = NotFound desc = could not find container \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": container with ID starting with 09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf not found: ID does not exist" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747646 4984 scope.go:117] "RemoveContainer" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab" Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.748029 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": container with ID starting with 27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab not found: ID does not exist" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.748120 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"} err="failed to get container status \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": rpc error: code = NotFound desc = could not find container \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": container with ID starting with 27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab not found: ID does not exist" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.777057 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.017276 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.029927 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.102383 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a2e96e-1695-43f9-b487-f79599171463" path="/var/lib/kubelet/pods/56a2e96e-1695-43f9-b487-f79599171463/volumes" Jan 30 11:08:33 crc kubenswrapper[4984]: I0130 11:08:33.001342 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:08:33 crc kubenswrapper[4984]: I0130 11:08:33.001956 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:09:03 crc kubenswrapper[4984]: I0130 11:09:03.000932 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:09:03 crc kubenswrapper[4984]: I0130 11:09:03.001528 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:09:07 crc kubenswrapper[4984]: I0130 11:09:07.573218 4984 generic.go:334] "Generic (PLEG): container finished" podID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerID="37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee" exitCode=0 Jan 30 11:09:07 crc kubenswrapper[4984]: I0130 11:09:07.573387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerDied","Data":"37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee"} Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.060539 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171536 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171622 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171942 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172085 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172120 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172164 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.177675 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.178081 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data" (OuterVolumeSpecName: "config-data") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.179726 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.181471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc" (OuterVolumeSpecName: "kube-api-access-m6mxc") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "kube-api-access-m6mxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.181759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.211869 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.223880 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.224588 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.237903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275129 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275195 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275214 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275226 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275238 4984 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275286 4984 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275300 4984 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath 
\"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275373 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275393 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.307955 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.377783 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.599801 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerDied","Data":"b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a"} Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.599849 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a" Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.600112 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.909394 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910680 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910701 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server" Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910720 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-content" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910745 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-content" Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910763 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910774 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner" Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910788 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-utilities" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910796 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-utilities" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911030 4984 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911047 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911881 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.915517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68tn4" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.931885 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.978729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.978779 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.081694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.081783 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.082472 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.115397 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.134412 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 
crc kubenswrapper[4984]: I0130 11:09:14.252658 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.536867 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.658345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d46e480c-151c-4f4c-a1c8-bbad4b31d37b","Type":"ContainerStarted","Data":"ffbd0bd5cf7579735e9bdb0137bc69592501d71382a683ff2542375d0b9b62b4"} Jan 30 11:09:15 crc kubenswrapper[4984]: I0130 11:09:15.689103 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.76024037 podStartE2EDuration="2.689078748s" podCreationTimestamp="2026-01-30 11:09:13 +0000 UTC" firstStartedPulling="2026-01-30 11:09:14.539517483 +0000 UTC m=+3459.105821347" lastFinishedPulling="2026-01-30 11:09:15.468355901 +0000 UTC m=+3460.034659725" observedRunningTime="2026-01-30 11:09:15.686901709 +0000 UTC m=+3460.253205543" watchObservedRunningTime="2026-01-30 11:09:15.689078748 +0000 UTC m=+3460.255382612" Jan 30 11:09:16 crc kubenswrapper[4984]: I0130 11:09:16.681904 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d46e480c-151c-4f4c-a1c8-bbad4b31d37b","Type":"ContainerStarted","Data":"b1bc47d761760227c1f90fe9f59df9222863fef67373339f9d1d1b59bd702678"} Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.001408 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.002240 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.002369 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.003589 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.003695 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" gracePeriod=600 Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.865403 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" exitCode=0 Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.865455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"} Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.866033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"} Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.866102 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.872223 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.874509 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.880796 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjn52"/"openshift-service-ca.crt" Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.881025 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjn52"/"kube-root-ca.crt" Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.896988 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.009923 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc 
kubenswrapper[4984]: I0130 11:09:38.010019 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.111621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.111719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.112376 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.157735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc 
kubenswrapper[4984]: I0130 11:09:38.192674 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:09:38 crc kubenswrapper[4984]: W0130 11:09:38.666093 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19549b5_8918_4ff3_b266_67d6d2ef2c3f.slice/crio-935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7 WatchSource:0}: Error finding container 935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7: Status 404 returned error can't find the container with id 935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7 Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.677373 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.924863 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjn52/must-gather-7xpvm" event={"ID":"e19549b5-8918-4ff3-b266-67d6d2ef2c3f","Type":"ContainerStarted","Data":"935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7"} Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.276373 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.279261 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.291165 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499726 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499902 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.500788 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.501111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.530358 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.621237 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.282751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.287039 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.305103 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.412831 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.413041 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.413155 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514311 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514353 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514883 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514917 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.536442 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx46n\" (UniqueName: 
\"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.611611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:49 crc kubenswrapper[4984]: I0130 11:09:49.815193 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podUID="48ae7d4f-38b1-40c0-ad61-815992265930" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 11:09:51 crc kubenswrapper[4984]: I0130 11:09:51.816159 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podUID="48ae7d4f-38b1-40c0-ad61-815992265930" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.216540 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.217116 4984 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 11:09:54 crc kubenswrapper[4984]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Jan 30 11:09:54 crc kubenswrapper[4984]: HAVE_SESSION_TOOLS=true Jan 30 11:09:54 crc kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: HAVE_SESSION_TOOLS=false Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: Jan 30 11:09:54 
crc kubenswrapper[4984]: Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Started" Jan 30 11:09:54 crc kubenswrapper[4984]: target_dir="/must-gather" Jan 30 11:09:54 crc kubenswrapper[4984]: usage_percentage_limit="80" Jan 30 11:09:54 crc kubenswrapper[4984]: while true; do Jan 30 11:09:54 crc kubenswrapper[4984]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Jan 30 11:09:54 crc kubenswrapper[4984]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Jan 30 11:09:54 crc kubenswrapper[4984]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: ps -o sess --no-headers | sort -u | while read sid; do Jan 30 11:09:54 crc kubenswrapper[4984]: [[ "$sid" -eq "${$}" ]] && continue Jan 30 11:09:54 crc kubenswrapper[4984]: pkill --signal SIGKILL --session "$sid" Jan 30 11:09:54 crc kubenswrapper[4984]: done Jan 30 11:09:54 crc kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: kill 0 Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: exit 1 Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: sleep 5 Jan 30 11:09:54 crc kubenswrapper[4984]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: setsid -w bash <<-MUSTGATHER_EOF Jan 30 11:09:54 crc kubenswrapper[4984]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 30 11:09:54 crc kubenswrapper[4984]: MUSTGATHER_EOF Jan 30 11:09:54 crc 
kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 30 11:09:54 crc kubenswrapper[4984]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmqxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-7xpvm_openshift-must-gather-gjn52(e19549b5-8918-4ff3-b266-67d6d2ef2c3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 30 11:09:54 crc kubenswrapper[4984]: > logger="UnhandledError" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.220580 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-gjn52/must-gather-7xpvm" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" Jan 30 11:09:54 crc kubenswrapper[4984]: I0130 11:09:54.564180 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:09:54 crc kubenswrapper[4984]: W0130 11:09:54.637119 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481c11d6_78db_4d13_a7e1_a25934756df0.slice/crio-24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e WatchSource:0}: Error finding container 24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e: Status 404 returned error can't find the container with id 24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e Jan 30 11:09:54 crc kubenswrapper[4984]: I0130 11:09:54.643443 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:09:55 crc kubenswrapper[4984]: I0130 11:09:55.123580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerStarted","Data":"370a6116438d07c6ecca66252e5f79af0c94d4041ac1898cb8bc8db4b2616376"} Jan 30 11:09:55 crc kubenswrapper[4984]: I0130 11:09:55.125465 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e"} Jan 30 11:09:55 crc kubenswrapper[4984]: E0130 11:09:55.128433 4984 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-gjn52/must-gather-7xpvm" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" Jan 30 11:09:56 crc kubenswrapper[4984]: I0130 11:09:56.152487 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" exitCode=0 Jan 30 11:09:56 crc kubenswrapper[4984]: I0130 11:09:56.152557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152"} Jan 30 11:09:57 crc kubenswrapper[4984]: I0130 11:09:57.164542 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62" exitCode=0 Jan 30 11:09:57 crc kubenswrapper[4984]: I0130 11:09:57.166313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62"} Jan 30 11:09:58 crc kubenswrapper[4984]: I0130 11:09:58.181574 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.193020 4984 generic.go:334] "Generic 
(PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829" exitCode=0 Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.193172 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829"} Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.195752 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" exitCode=0 Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.195804 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.207337 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerStarted","Data":"7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.210357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.238312 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjkxq" podStartSLOduration=10.779091177 podStartE2EDuration="13.238292396s" podCreationTimestamp="2026-01-30 
11:09:47 +0000 UTC" firstStartedPulling="2026-01-30 11:09:57.167899305 +0000 UTC m=+3501.734203169" lastFinishedPulling="2026-01-30 11:09:59.627100564 +0000 UTC m=+3504.193404388" observedRunningTime="2026-01-30 11:10:00.230211318 +0000 UTC m=+3504.796515152" watchObservedRunningTime="2026-01-30 11:10:00.238292396 +0000 UTC m=+3504.804596220" Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.254360 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4zbk" podStartSLOduration=11.826195641 podStartE2EDuration="14.25433827s" podCreationTimestamp="2026-01-30 11:09:46 +0000 UTC" firstStartedPulling="2026-01-30 11:09:57.168600874 +0000 UTC m=+3501.734904718" lastFinishedPulling="2026-01-30 11:09:59.596743483 +0000 UTC m=+3504.163047347" observedRunningTime="2026-01-30 11:10:00.246660882 +0000 UTC m=+3504.812964706" watchObservedRunningTime="2026-01-30 11:10:00.25433827 +0000 UTC m=+3504.820642094" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.512280 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.520232 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.795425 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965526 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965583 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965953 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e19549b5-8918-4ff3-b266-67d6d2ef2c3f" (UID: "e19549b5-8918-4ff3-b266-67d6d2ef2c3f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.966173 4984 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.971669 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk" (OuterVolumeSpecName: "kube-api-access-jmqxk") pod "e19549b5-8918-4ff3-b266-67d6d2ef2c3f" (UID: "e19549b5-8918-4ff3-b266-67d6d2ef2c3f"). InnerVolumeSpecName "kube-api-access-jmqxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.067986 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.105846 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" path="/var/lib/kubelet/pods/e19549b5-8918-4ff3-b266-67d6d2ef2c3f/volumes" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.247424 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.622062 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.622778 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.711915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.343712 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.612614 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.612668 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.677534 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:08 crc kubenswrapper[4984]: I0130 11:10:08.351110 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:09 crc kubenswrapper[4984]: I0130 11:10:09.703858 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:10:09 crc kubenswrapper[4984]: I0130 11:10:09.704357 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4zbk" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" containerID="cri-o://49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" gracePeriod=2 Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.222597 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311367 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" exitCode=0 Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311415 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311444 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e"} Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 
11:10:10.311464 4984 scope.go:117] "RemoveContainer" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311604 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.347924 4984 scope.go:117] "RemoveContainer" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.368386 4984 scope.go:117] "RemoveContainer" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.405987 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.406160 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.406294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.407744 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities" (OuterVolumeSpecName: 
"utilities") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409519 4984 scope.go:117] "RemoveContainer" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.409935 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": container with ID starting with 49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667 not found: ID does not exist" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409969 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} err="failed to get container status \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": rpc error: code = NotFound desc = could not find container \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": container with ID starting with 49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667 not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409996 4984 scope.go:117] "RemoveContainer" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.410279 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": container with ID starting with 112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e not found: ID does not exist" 
containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410311 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} err="failed to get container status \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": rpc error: code = NotFound desc = could not find container \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": container with ID starting with 112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410325 4984 scope.go:117] "RemoveContainer" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.410677 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": container with ID starting with db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152 not found: ID does not exist" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410696 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152"} err="failed to get container status \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": rpc error: code = NotFound desc = could not find container \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": container with ID starting with db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152 not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.413548 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7" (OuterVolumeSpecName: "kube-api-access-ppdx7") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "kube-api-access-ppdx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.438280 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508679 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508710 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508719 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.647992 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.655327 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 
30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.096517 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.100665 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cjkxq" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" containerID="cri-o://7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" gracePeriod=2 Jan 30 11:10:11 crc kubenswrapper[4984]: E0130 11:10:11.134505 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded097268_a63b_4ff5_ba86_a5717af5a2ad.slice/crio-7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787.scope\": RecentStats: unable to find data in memory cache]" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.337918 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" exitCode=0 Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.337979 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787"} Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.538540 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630009 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630077 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630112 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.631931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities" (OuterVolumeSpecName: "utilities") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.635921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n" (OuterVolumeSpecName: "kube-api-access-rx46n") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "kube-api-access-rx46n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.683215 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.732921 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.733159 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.733174 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.103048 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" path="/var/lib/kubelet/pods/481c11d6-78db-4d13-a7e1-a25934756df0/volumes" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351096 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"370a6116438d07c6ecca66252e5f79af0c94d4041ac1898cb8bc8db4b2616376"} Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351155 4984 scope.go:117] "RemoveContainer" 
containerID="7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351181 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.373109 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.380850 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.384514 4984 scope.go:117] "RemoveContainer" containerID="6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.420398 4984 scope.go:117] "RemoveContainer" containerID="4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62" Jan 30 11:10:14 crc kubenswrapper[4984]: I0130 11:10:14.103165 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" path="/var/lib/kubelet/pods/ed097268-a63b-4ff5-ba86-a5717af5a2ad/volumes" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.562642 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563447 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563459 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563469 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" 
containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563476 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563501 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563507 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563522 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563537 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563544 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563555 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563560 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563784 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" 
containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563797 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.564794 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567447 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fkd9b"/"default-dockercfg-xc97d" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567523 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fkd9b"/"kube-root-ca.crt" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567706 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fkd9b"/"openshift-service-ca.crt" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.574518 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.701638 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.701873 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 
11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.803611 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.803803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.804388 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.824334 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.882574 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:54 crc kubenswrapper[4984]: I0130 11:10:54.432493 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:54 crc kubenswrapper[4984]: I0130 11:10:54.882307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"92e57bf5d4c242d464a31af3fe1fba7441c13337468fb1e746c40edbe2adfcf4"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.895829 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.896213 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.929589 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fkd9b/must-gather-clm44" podStartSLOduration=2.413636158 podStartE2EDuration="2.929571344s" podCreationTimestamp="2026-01-30 11:10:53 +0000 UTC" firstStartedPulling="2026-01-30 11:10:54.46336454 +0000 UTC m=+3559.029668364" lastFinishedPulling="2026-01-30 11:10:54.979299686 +0000 UTC m=+3559.545603550" observedRunningTime="2026-01-30 11:10:55.927405806 +0000 UTC m=+3560.493709670" watchObservedRunningTime="2026-01-30 11:10:55.929571344 +0000 UTC m=+3560.495875168" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.211992 4984 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.213872 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.251984 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.252059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354365 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354529 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.384219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.533580 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.936483 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerStarted","Data":"c4a7e615730dfe10a68536f14b62ed012291f023cbf341e261c7a8b5734a82e7"} Jan 30 11:11:11 crc kubenswrapper[4984]: I0130 11:11:11.036461 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerStarted","Data":"2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0"} Jan 30 11:11:11 crc kubenswrapper[4984]: I0130 11:11:11.054241 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" podStartSLOduration=1.719969952 podStartE2EDuration="12.054222942s" podCreationTimestamp="2026-01-30 11:10:59 +0000 UTC" firstStartedPulling="2026-01-30 11:10:59.569597744 +0000 UTC m=+3564.135901568" lastFinishedPulling="2026-01-30 11:11:09.903850734 +0000 UTC m=+3574.470154558" observedRunningTime="2026-01-30 11:11:11.048305562 +0000 UTC m=+3575.614609396" watchObservedRunningTime="2026-01-30 
11:11:11.054222942 +0000 UTC m=+3575.620526766" Jan 30 11:11:33 crc kubenswrapper[4984]: I0130 11:11:33.001198 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:11:33 crc kubenswrapper[4984]: I0130 11:11:33.001696 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:11:51 crc kubenswrapper[4984]: I0130 11:11:51.420163 4984 generic.go:334] "Generic (PLEG): container finished" podID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerID="2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0" exitCode=0 Jan 30 11:11:51 crc kubenswrapper[4984]: I0130 11:11:51.420245 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerDied","Data":"2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0"} Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.541950 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.590953 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"fc71ca13-3f19-4c0e-8245-1656fc723d67\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.591089 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host" (OuterVolumeSpecName: "host") pod "fc71ca13-3f19-4c0e-8245-1656fc723d67" (UID: "fc71ca13-3f19-4c0e-8245-1656fc723d67"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.591366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"fc71ca13-3f19-4c0e-8245-1656fc723d67\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.592014 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.600318 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.602973 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx" (OuterVolumeSpecName: "kube-api-access-z6lkx") pod "fc71ca13-3f19-4c0e-8245-1656fc723d67" (UID: "fc71ca13-3f19-4c0e-8245-1656fc723d67"). 
InnerVolumeSpecName "kube-api-access-z6lkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.609939 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.693459 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.439973 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a7e615730dfe10a68536f14b62ed012291f023cbf341e261c7a8b5734a82e7" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.440120 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783190 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:53 crc kubenswrapper[4984]: E0130 11:11:53.783610 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783624 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783803 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.784706 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.926354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.926537 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.028897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.029088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.029151 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc 
kubenswrapper[4984]: I0130 11:11:54.053841 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.105520 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.111115 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" path="/var/lib/kubelet/pods/fc71ca13-3f19-4c0e-8245-1656fc723d67/volumes" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.455741 4984 generic.go:334] "Generic (PLEG): container finished" podID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerID="0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68" exitCode=0 Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.455836 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" event={"ID":"004731e7-03ce-4a34-919a-3cfcc05195a4","Type":"ContainerDied","Data":"0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68"} Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.456280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" event={"ID":"004731e7-03ce-4a34-919a-3cfcc05195a4","Type":"ContainerStarted","Data":"46c4bd630060d2892d1426c3630934eeec19495d3caf7a251009deb5ba67ff3a"} Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.969107 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.977088 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.555428 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.661606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"004731e7-03ce-4a34-919a-3cfcc05195a4\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"004731e7-03ce-4a34-919a-3cfcc05195a4\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662133 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host" (OuterVolumeSpecName: "host") pod "004731e7-03ce-4a34-919a-3cfcc05195a4" (UID: "004731e7-03ce-4a34-919a-3cfcc05195a4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662923 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.669800 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4" (OuterVolumeSpecName: "kube-api-access-swcn4") pod "004731e7-03ce-4a34-919a-3cfcc05195a4" (UID: "004731e7-03ce-4a34-919a-3cfcc05195a4"). 
InnerVolumeSpecName "kube-api-access-swcn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.764602 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.108516 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" path="/var/lib/kubelet/pods/004731e7-03ce-4a34-919a-3cfcc05195a4/volumes" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.220421 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:56 crc kubenswrapper[4984]: E0130 11:11:56.220988 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.221019 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.221385 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.222335 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.275283 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.275507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378479 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378725 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc 
kubenswrapper[4984]: I0130 11:11:56.411067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.479655 4984 scope.go:117] "RemoveContainer" containerID="0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.479773 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.539196 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: W0130 11:11:56.584670 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71550211_cb32_4484_9ebf_6ea10af9bf54.slice/crio-c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8 WatchSource:0}: Error finding container c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8: Status 404 returned error can't find the container with id c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8 Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.494710 4984 generic.go:334] "Generic (PLEG): container finished" podID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerID="11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b" exitCode=0 Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.494812 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-47684" 
event={"ID":"71550211-cb32-4484-9ebf-6ea10af9bf54","Type":"ContainerDied","Data":"11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b"} Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.495354 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-47684" event={"ID":"71550211-cb32-4484-9ebf-6ea10af9bf54","Type":"ContainerStarted","Data":"c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8"} Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.554676 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.574171 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.622131 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732766 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"71550211-cb32-4484-9ebf-6ea10af9bf54\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732901 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"71550211-cb32-4484-9ebf-6ea10af9bf54\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732969 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host" (OuterVolumeSpecName: "host") pod "71550211-cb32-4484-9ebf-6ea10af9bf54" (UID: 
"71550211-cb32-4484-9ebf-6ea10af9bf54"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.733578 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.737331 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj" (OuterVolumeSpecName: "kube-api-access-wlbvj") pod "71550211-cb32-4484-9ebf-6ea10af9bf54" (UID: "71550211-cb32-4484-9ebf-6ea10af9bf54"). InnerVolumeSpecName "kube-api-access-wlbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.835470 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:59 crc kubenswrapper[4984]: I0130 11:11:59.517616 4984 scope.go:117] "RemoveContainer" containerID="11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b" Jan 30 11:11:59 crc kubenswrapper[4984]: I0130 11:11:59.517674 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:12:00 crc kubenswrapper[4984]: I0130 11:12:00.102134 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" path="/var/lib/kubelet/pods/71550211-cb32-4484-9ebf-6ea10af9bf54/volumes" Jan 30 11:12:03 crc kubenswrapper[4984]: I0130 11:12:03.000601 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:12:03 crc kubenswrapper[4984]: I0130 11:12:03.000982 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:12:12 crc kubenswrapper[4984]: I0130 11:12:12.905765 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cfd8d5fd8-lwgk4_217935e2-7a1e-44a6-b6fd-e64c41155d6d/barbican-api/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.054348 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cfd8d5fd8-lwgk4_217935e2-7a1e-44a6-b6fd-e64c41155d6d/barbican-api-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.111999 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75ff98474b-zm29s_1368411d-c934-4d15-a67b-dc840dbe010d/barbican-keystone-listener/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.178894 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-75ff98474b-zm29s_1368411d-c934-4d15-a67b-dc840dbe010d/barbican-keystone-listener-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.285739 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664bd6b5fc-shfjg_aa6393c8-34de-43fc-9a00-a0f87b31d8e8/barbican-worker-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.309153 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664bd6b5fc-shfjg_aa6393c8-34de-43fc-9a00-a0f87b31d8e8/barbican-worker/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.454488 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd_ba20d4a0-7acc-4813-8fa9-6f166802bd04/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.511550 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/ceilometer-central-agent/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.600923 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/ceilometer-notification-agent/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.656154 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/proxy-httpd/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.691739 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/sg-core/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.824558 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a8d6abba-9a6d-4a99-a68b-659c1e111893/cinder-api/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 
11:12:13.880044 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a8d6abba-9a6d-4a99-a68b-659c1e111893/cinder-api-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.964945 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ced7140-d346-43c7-9139-7f460af079e2/cinder-scheduler/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.012148 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ced7140-d346-43c7-9139-7f460af079e2/probe/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.135878 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg_ed90c997-eddb-4afb-ae0d-31dd3ef4c485/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.244550 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-blm26_5ca6f868-9db4-483a-bea5-dc471b160721/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.374189 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/init/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.502466 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/init/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.563413 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/dnsmasq-dns/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.652808 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zfts7_8414dabf-1fa1-4a4c-8db5-55ef7397164d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.782846 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa01bff-d884-4b1f-b0c2-8c0fbd957a30/glance-log/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.804656 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa01bff-d884-4b1f-b0c2-8c0fbd957a30/glance-httpd/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.949866 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_96bc5a16-54a8-4008-98ea-3adb9b24e9fa/glance-httpd/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.955579 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_96bc5a16-54a8-4008-98ea-3adb9b24e9fa/glance-log/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.159957 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb76cb6cb-wtx8d_d1c7d24e-f131-485d-aaec-80a94d7ddd96/horizon/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.250801 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9_908eb334-fac2-41ed-96d6-d7c80f8e98b3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.415784 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb76cb6cb-wtx8d_d1c7d24e-f131-485d-aaec-80a94d7ddd96/horizon-log/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.417433 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6cgx8_875c90f8-2855-43ce-993f-fa64c7d92c66/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.565775 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496181-vtxhk_a5d9b60c-98e5-4132-9193-0b13ac2893a5/keystone-cron/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.708648 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-9fd9687b7-kdppr_0cddf025-bb36-4984-82b8-360ab9f3d91c/keystone-api/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.730892 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa/kube-state-metrics/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.931758 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm_d3ca7cba-514d-4761-821d-9b48578f0cc3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.276074 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5565c8d7-xqnh6_0e442774-b2c1-418a-a5b2-edfd20f23c27/neutron-httpd/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.284632 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5565c8d7-xqnh6_0e442774-b2c1-418a-a5b2-edfd20f23c27/neutron-api/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.456699 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp_4549607f-18ca-42e1-8c2b-b7d9793e2005/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.908171 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_4d02a683-2231-4e04-89bb-748baf8bc65d/nova-cell0-conductor-conductor/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.964031 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2a8b1830-c479-4612-a461-7cb46d2c949f/nova-api-log/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.098645 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2a8b1830-c479-4612-a461-7cb46d2c949f/nova-api-api/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.263283 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5b097926-177e-428a-a271-ede45f90f7d6/nova-cell1-conductor-conductor/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.270888 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3933f23e-210c-483f-82ec-eb0cdbc09f4c/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.417832 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lrcvm_eaa18315-192f-412f-b94c-708c98209a5a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.516747 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0538ab81-6e35-473d-860f-7f680671646d/nova-metadata-log/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.847963 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/mysql-bootstrap/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.860949 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d79d0dc1-f229-4dd7-9d7c-a0e420d6452d/nova-scheduler-scheduler/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 
11:12:18.092881 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.095549 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/galera/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.287346 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.493118 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.555366 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/galera/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.663365 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_141e094b-e8c8-4a61-b93c-8dec5ac89823/openstackclient/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.669966 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0538ab81-6e35-473d-860f-7f680671646d/nova-metadata-metadata/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.787195 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-m4spx_63184ee8-263b-4506-8844-4ae4fd2a80c7/ovn-controller/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.920001 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ms66d_dbcc0b77-42fd-47ec-9b91-94e2c070c0ec/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.029643 4984 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server-init/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.202276 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.240988 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server-init/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.253519 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovs-vswitchd/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.433026 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-57hv6_2f986324-c570-4c65-aed1-952aa2538af8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.469386 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e86681f0-5ba9-45f2-b0b7-0b9a49dc6706/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.506384 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e86681f0-5ba9-45f2-b0b7-0b9a49dc6706/ovn-northd/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.627600 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.657428 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4/ovsdbserver-nb/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 
11:12:19.826834 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_53bd6a11-6ac6-4b0e-ae41-8afd88f351e6/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.860562 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_53bd6a11-6ac6-4b0e-ae41-8afd88f351e6/ovsdbserver-sb/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.035388 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68474f84b8-6pzwt_34cd991a-90cf-410c-828d-db99caf6dcea/placement-api/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.070032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68474f84b8-6pzwt_34cd991a-90cf-410c-828d-db99caf6dcea/placement-log/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.129479 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.379485 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.392195 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/rabbitmq/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.392952 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.579998 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.616822 4984 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/rabbitmq/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.695178 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45_b6b5ab38-6c9b-4526-bbee-d3a4c460ea78/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.804153 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pgdm4_049a948c-1945-4217-b728-7f39570dd740/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.949151 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn_1985e15d-70be-4079-bd48-55c782dfcba7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.051792 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7fxcn_b337ec46-c5ba-4b83-91f7-ad4b826d9595/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.145079 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ds8rj_1e567c3d-d9b0-4be3-ad02-21a342ce33fd/ssh-known-hosts-edpm-deployment/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.417541 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f6d8f475-hmb99_a88ca399-adf6-4df4-8216-84de7603712b/proxy-server/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.438029 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f6d8f475-hmb99_a88ca399-adf6-4df4-8216-84de7603712b/proxy-httpd/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.613151 4984 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j9rvs_7cfe4feb-b1bb-4904-9955-c5833ef34e9e/swift-ring-rebalance/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.630763 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-auditor/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.695118 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-reaper/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.847294 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-auditor/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.877128 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-replicator/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.882971 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-server/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.933974 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-replicator/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.023868 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-server/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.087662 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-auditor/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.117101 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-updater/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.149179 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-expirer/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.265736 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-replicator/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.272063 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-server/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.352991 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-updater/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.422532 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/rsync/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.492319 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/swift-recon-cron/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.655776 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-npmxf_2498ca77-0e58-4af1-b59d-c19e6b11f2f9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.691219 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2281d2df-38c2-4c96-bff0-09cf745f1e50/tempest-tests-tempest-tests-runner/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.836550 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d46e480c-151c-4f4c-a1c8-bbad4b31d37b/test-operator-logs-container/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.898638 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5_d0aef065-96aa-4cd6-9069-627c5f97fcc3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:32 crc kubenswrapper[4984]: I0130 11:12:32.426609 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ab30531b-1df7-460e-956c-bc849792098b/memcached/0.log" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000480 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000537 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000582 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.001132 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.001190 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" gracePeriod=600 Jan 30 11:12:33 crc kubenswrapper[4984]: E0130 11:12:33.121036 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804021 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" exitCode=0 Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804112 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"} Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804430 4984 scope.go:117] "RemoveContainer" containerID="d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.805229 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:33 crc kubenswrapper[4984]: E0130 11:12:33.805606 4984 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:45 crc kubenswrapper[4984]: I0130 11:12:45.091838 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:45 crc kubenswrapper[4984]: E0130 11:12:45.092801 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.561567 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-sxpfj_5d977367-099f-4a10-bf37-9e9cd913932e/manager/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.677239 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-cnxbk_74bafe89-dc08-4029-823c-f0c3579b8d6b/manager/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.756289 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.892971 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.961378 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.974360 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.172330 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.191994 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.203055 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/extract/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.316218 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-b674n_8c70fc0b-a348-4dcd-8fc3-9afa1c22318e/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.409918 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-tjfpn_254d2d7e-3636-429d-b043-501d76db73e9/manager/0.log" Jan 30 11:12:48 crc 
kubenswrapper[4984]: I0130 11:12:48.495168 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-zl2fj_5e7c3856-3562-4cb4-b131-48302c43ce25/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.628638 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-zzd6d_7a6dd1f5-d0b6-49a6-9270-dd98f2147932/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.792764 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-8hrrf_3899fe05-64bb-48b9-88dc-2341ad9bc00b/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.889701 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-t5j55_e420c57f-7248-4454-926f-48766e48236c/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.033340 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-zwc2t_dd895dbf-b809-498c-95fd-dfd09a9eeb4d/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.047970 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2wvrh_739ed1d4-c090-4166-9352-d048e0b281d6/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.237058 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-t75dn_67a8ae49-7f19-47bc-8e54-0873c535f6ff/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.299800 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-2tbcn_1d30b9a6-fe73-4e32-9095-65b1950f7afe/manager/0.log" Jan 30 
11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.472719 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-gcbx5_ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.485467 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-sh7cp_c6ee91ae-9b91-46a7-ad2a-c67133a4f40e/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.644923 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4df2g45_8d22f0a7-a541-405b-8146-fb098d02ddcc/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.763431 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7d4ff8bbbc-68r69_f4b80c7c-3e81-48d4-862c-684369655891/operator/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.933453 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nqgjv_be54871d-c3f5-40bc-b6cd-63602755ca51/registry-server/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.250141 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-28kkh_bb50c219-6036-48d0-8568-0a1601150272/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.252982 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-fx6t9_69e058b7-deda-4eb8-9cac-6bc08032b3bf/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.532766 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vpt86_e8bf6651-ff58-478c-be28-39732dac675b/operator/0.log" 
Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.618958 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-jvcvp_c3eec896-3441-4b0e-a7e5-4bde717dbccd/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.824032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-r7hs4_df5d4f32-b49b-46ea-8aac-a3b76b2f8f00/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.901200 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4lz58_350834d1-9352-4ca5-9c8a-acf60193ebc8/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.964327 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8d786f48c-jtznv_87613c07-d864-4440-b31c-03c4bb3f8ce0/manager/0.log" Jan 30 11:12:51 crc kubenswrapper[4984]: I0130 11:12:51.051797 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-h7pcb_9a53674a-07ad-4bfc-80c8-f55bcc286eb0/manager/0.log" Jan 30 11:12:58 crc kubenswrapper[4984]: I0130 11:12:58.090657 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:58 crc kubenswrapper[4984]: E0130 11:12:58.091437 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:10 crc kubenswrapper[4984]: I0130 11:13:10.096863 4984 
scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:10 crc kubenswrapper[4984]: E0130 11:13:10.097568 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.627921 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g5m7t_3c2dcd5a-96f0-48ff-a004-9764d24b66b1/control-plane-machine-set-operator/0.log" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.744379 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9k4d_218f0398-9175-448b-83b8-6445e2c3df37/kube-rbac-proxy/0.log" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.790185 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9k4d_218f0398-9175-448b-83b8-6445e2c3df37/machine-api-operator/0.log" Jan 30 11:13:22 crc kubenswrapper[4984]: I0130 11:13:22.090422 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:22 crc kubenswrapper[4984]: E0130 11:13:22.091706 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.302450 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rlb95_f1c83115-1333-4064-8217-eb2edae57d74/cert-manager-controller/0.log" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.454125 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2f5gm_c7557472-15a5-48a9-8a84-bd8478d45a4b/cert-manager-cainjector/0.log" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.510281 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-r7gsp_4a218ad6-abfb-49ac-9f07-a79d9f3bd07e/cert-manager-webhook/0.log" Jan 30 11:13:34 crc kubenswrapper[4984]: I0130 11:13:34.090433 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:34 crc kubenswrapper[4984]: E0130 11:13:34.091323 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.340624 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mpwzb_471cb540-b50e-4adb-8984-65c46a7f9714/nmstate-console-plugin/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.520039 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vh6vz_88dac402-7307-465d-b5a0-61762ee570c6/nmstate-handler/0.log" Jan 
30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.521481 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7x2rq_f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf/kube-rbac-proxy/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.612032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7x2rq_f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf/nmstate-metrics/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.719313 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tl42h_ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b/nmstate-operator/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.790774 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-gnkrh_739c7b03-ba6e-48de-a07b-6bd4206c206f/nmstate-webhook/0.log" Jan 30 11:13:46 crc kubenswrapper[4984]: I0130 11:13:46.095673 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:46 crc kubenswrapper[4984]: E0130 11:13:46.096396 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:58 crc kubenswrapper[4984]: I0130 11:13:58.107849 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:58 crc kubenswrapper[4984]: E0130 11:13:58.108772 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.126918 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4tngn_2ae05bf6-d99c-4fb1-9780-20249ec78e1e/kube-rbac-proxy/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.190272 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4tngn_2ae05bf6-d99c-4fb1-9780-20249ec78e1e/controller/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.328385 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-j62qw_7e54bb11-7cfb-4840-b861-bd6d184c36f4/frr-k8s-webhook-server/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.385206 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.553901 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.619821 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.619962 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.621435 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.764833 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.807092 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.860935 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.870154 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.053127 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.063407 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/controller/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.077699 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.081997 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.256672 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/kube-rbac-proxy-frr/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.305775 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/frr-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.321887 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/kube-rbac-proxy/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.447275 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/reloader/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.544333 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-f5cdbcd49-plrfk_fb5cf2c1-4334-4aee-9f94-2f1c2797b484/manager/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.754595 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86d4db4f7b-qz6m4_b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05/webhook-server/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.176593 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc8c7_07684256-0759-426a-9ba0-40514aa3e7ac/kube-rbac-proxy/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.503496 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/frr/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.569776 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc8c7_07684256-0759-426a-9ba0-40514aa3e7ac/speaker/0.log" Jan 30 11:14:09 crc kubenswrapper[4984]: I0130 11:14:09.090847 4984 scope.go:117] "RemoveContainer" 
containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:09 crc kubenswrapper[4984]: E0130 11:14:09.091317 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:21 crc kubenswrapper[4984]: I0130 11:14:21.091144 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:21 crc kubenswrapper[4984]: E0130 11:14:21.092333 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.535897 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.626353 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.629882 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.755898 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.921517 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.923409 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/extract/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.031608 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.166938 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.360389 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.408961 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 
11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.557410 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.527077 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/extract/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.531177 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.532557 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.713219 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.913772 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.944825 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.959188 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 
11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.146556 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.182385 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.378393 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.541674 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.559378 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.639400 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.849375 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.856391 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.873389 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/registry-server/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.439028 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tttcx_ed0e4098-37d9-4094-99d0-1892881696ad/marketplace-operator/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.459165 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/registry-server/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.489412 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.624911 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.642511 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.662682 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.840101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.885503 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.905808 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.014223 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/registry-server/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.059516 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.108094 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.132520 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.297475 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.317261 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:29 crc kubenswrapper[4984]: I0130 11:14:29.108071 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/registry-server/0.log" Jan 30 
11:14:35 crc kubenswrapper[4984]: I0130 11:14:35.090847 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:35 crc kubenswrapper[4984]: E0130 11:14:35.091690 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:50 crc kubenswrapper[4984]: I0130 11:14:50.091707 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:50 crc kubenswrapper[4984]: E0130 11:14:50.093022 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.167610 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:00 crc kubenswrapper[4984]: E0130 11:15:00.168933 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.168951 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.169207 4984 
memory_manager.go:354] "RemoveStaleState removing state" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.169930 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.172245 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.172985 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.180359 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.187838 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.188113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.188184 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290112 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.291132 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: 
I0130 11:15:00.295715 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.308852 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.491927 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.008280 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.224381 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerStarted","Data":"c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e"} Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.224639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerStarted","Data":"d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f"} Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.240553 4984 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" podStartSLOduration=1.240534683 podStartE2EDuration="1.240534683s" podCreationTimestamp="2026-01-30 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 11:15:01.236460443 +0000 UTC m=+3805.802764267" watchObservedRunningTime="2026-01-30 11:15:01.240534683 +0000 UTC m=+3805.806838507" Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.090321 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:02 crc kubenswrapper[4984]: E0130 11:15:02.090672 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.235696 4984 generic.go:334] "Generic (PLEG): container finished" podID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerID="c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e" exitCode=0 Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.236579 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerDied","Data":"c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e"} Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.665613 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761600 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761879 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.762594 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume" (OuterVolumeSpecName: "config-volume") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.768971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n" (OuterVolumeSpecName: "kube-api-access-k6d4n") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). 
InnerVolumeSpecName "kube-api-access-k6d4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.777354 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864366 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864405 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864430 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258305 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerDied","Data":"d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f"} Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258737 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258802 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.352931 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.360936 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 11:15:06 crc kubenswrapper[4984]: I0130 11:15:06.108230 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" path="/var/lib/kubelet/pods/5c13999b-7269-403d-8be6-78d42f65f26c/volumes" Jan 30 11:15:17 crc kubenswrapper[4984]: I0130 11:15:17.091158 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:17 crc kubenswrapper[4984]: E0130 11:15:17.092483 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:24 crc kubenswrapper[4984]: I0130 11:15:24.587616 4984 scope.go:117] "RemoveContainer" containerID="673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176" Jan 30 11:15:28 crc kubenswrapper[4984]: I0130 11:15:28.091158 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:28 crc kubenswrapper[4984]: E0130 11:15:28.091831 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:41 crc kubenswrapper[4984]: I0130 11:15:41.090660 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:41 crc kubenswrapper[4984]: E0130 11:15:41.091659 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:56 crc kubenswrapper[4984]: I0130 11:15:56.098558 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:56 crc kubenswrapper[4984]: E0130 11:15:56.099348 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:11 crc kubenswrapper[4984]: I0130 11:16:11.090069 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:11 crc kubenswrapper[4984]: E0130 11:16:11.090987 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.106581 4984 generic.go:334] "Generic (PLEG): container finished" podID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" exitCode=0 Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.106843 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerDied","Data":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.107553 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:13 crc kubenswrapper[4984]: I0130 11:16:13.104568 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/gather/0.log" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.101135 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.102088 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fkd9b/must-gather-clm44" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" containerID="cri-o://ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" gracePeriod=2 Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.116962 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.581337 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/copy/0.log" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.582056 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.657406 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"5d446618-ad2a-4a27-a8f6-6afe185631c9\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.657611 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"5d446618-ad2a-4a27-a8f6-6afe185631c9\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.662861 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx" (OuterVolumeSpecName: "kube-api-access-r75gx") pod "5d446618-ad2a-4a27-a8f6-6afe185631c9" (UID: "5d446618-ad2a-4a27-a8f6-6afe185631c9"). InnerVolumeSpecName "kube-api-access-r75gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.760022 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") on node \"crc\" DevicePath \"\"" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.795934 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5d446618-ad2a-4a27-a8f6-6afe185631c9" (UID: "5d446618-ad2a-4a27-a8f6-6afe185631c9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.861651 4984 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.107552 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" path="/var/lib/kubelet/pods/5d446618-ad2a-4a27-a8f6-6afe185631c9/volumes" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223472 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/copy/0.log" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223917 4984 generic.go:334] "Generic (PLEG): container finished" podID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" exitCode=143 Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223980 4984 scope.go:117] "RemoveContainer" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc 
kubenswrapper[4984]: I0130 11:16:22.223996 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.245033 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.319487 4984 scope.go:117] "RemoveContainer" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc kubenswrapper[4984]: E0130 11:16:22.320647 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": container with ID starting with ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad not found: ID does not exist" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.320709 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad"} err="failed to get container status \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": rpc error: code = NotFound desc = could not find container \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": container with ID starting with ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad not found: ID does not exist" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.320744 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: E0130 11:16:22.324519 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": container with ID starting with dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6 not found: ID does not exist" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.324587 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} err="failed to get container status \"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": rpc error: code = NotFound desc = could not find container \"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": container with ID starting with dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6 not found: ID does not exist" Jan 30 11:16:24 crc kubenswrapper[4984]: I0130 11:16:24.090541 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:24 crc kubenswrapper[4984]: E0130 11:16:24.091388 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:39 crc kubenswrapper[4984]: I0130 11:16:39.091024 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:39 crc kubenswrapper[4984]: E0130 11:16:39.092089 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:53 crc kubenswrapper[4984]: I0130 11:16:53.091290 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:53 crc kubenswrapper[4984]: E0130 11:16:53.092242 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:07 crc kubenswrapper[4984]: I0130 11:17:07.091962 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:07 crc kubenswrapper[4984]: E0130 11:17:07.096517 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:20 crc kubenswrapper[4984]: I0130 11:17:20.091238 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:20 crc kubenswrapper[4984]: E0130 11:17:20.093857 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:24 crc kubenswrapper[4984]: I0130 11:17:24.723441 4984 scope.go:117] "RemoveContainer" containerID="2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0" Jan 30 11:17:35 crc kubenswrapper[4984]: I0130 11:17:35.091573 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:36 crc kubenswrapper[4984]: I0130 11:17:36.026939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"bd836a9bd18698fedb1d2813808a345c2079c53385cfda390de4be0312d43024"} Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.696339 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697370 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697474 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697512 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697520 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697542 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697550 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697761 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697773 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697787 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.699692 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.713089 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.830791 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.830965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " 
pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.831075 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.932965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933118 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933871 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: 
\"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933910 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.958696 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:56 crc kubenswrapper[4984]: I0130 11:17:56.034662 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:56 crc kubenswrapper[4984]: I0130 11:17:56.524785 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273133 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" exitCode=0 Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79"} Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273271 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" 
event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerStarted","Data":"5f1a932494ac9a3424b4a534f629ccdc1759c36a76486ae46608fece3886a242"} Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.276225 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 11:17:59 crc kubenswrapper[4984]: I0130 11:17:59.303121 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" exitCode=0 Jan 30 11:17:59 crc kubenswrapper[4984]: I0130 11:17:59.303202 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98"} Jan 30 11:18:01 crc kubenswrapper[4984]: I0130 11:18:01.327649 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerStarted","Data":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} Jan 30 11:18:01 crc kubenswrapper[4984]: I0130 11:18:01.351875 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-24tpt" podStartSLOduration=3.257712415 podStartE2EDuration="6.351850431s" podCreationTimestamp="2026-01-30 11:17:55 +0000 UTC" firstStartedPulling="2026-01-30 11:17:57.27589674 +0000 UTC m=+3981.842200574" lastFinishedPulling="2026-01-30 11:18:00.370034736 +0000 UTC m=+3984.936338590" observedRunningTime="2026-01-30 11:18:01.350039312 +0000 UTC m=+3985.916343186" watchObservedRunningTime="2026-01-30 11:18:01.351850431 +0000 UTC m=+3985.918154275" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.035002 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.035994 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.108692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.434543 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.501703 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:08 crc kubenswrapper[4984]: I0130 11:18:08.407466 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-24tpt" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" containerID="cri-o://29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" gracePeriod=2 Jan 30 11:18:08 crc kubenswrapper[4984]: I0130 11:18:08.935834 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043530 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.044462 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities" (OuterVolumeSpecName: "utilities") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.053544 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn" (OuterVolumeSpecName: "kube-api-access-r4sxn") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "kube-api-access-r4sxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.146368 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.146396 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423289 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" exitCode=0 Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423355 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423396 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"5f1a932494ac9a3424b4a534f629ccdc1759c36a76486ae46608fece3886a242"} Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423429 4984 scope.go:117] "RemoveContainer" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423453 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.461534 4984 scope.go:117] "RemoveContainer" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.489987 4984 scope.go:117] "RemoveContainer" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.581665 4984 scope.go:117] "RemoveContainer" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.582290 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": container with ID starting with 29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7 not found: ID does not exist" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582343 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} err="failed to get container status \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": rpc error: code = NotFound desc = could not find container \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": container with ID starting with 29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7 not found: ID does not exist" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582373 4984 scope.go:117] "RemoveContainer" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.582829 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": container with ID starting with e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98 not found: ID does not exist" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582884 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98"} err="failed to get container status \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": rpc error: code = NotFound desc = could not find container \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": container with ID starting with e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98 not found: ID does not exist" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582918 4984 scope.go:117] "RemoveContainer" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.583321 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": container with ID starting with 0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79 not found: ID does not exist" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.583364 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79"} err="failed to get container status \"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": rpc error: code = NotFound desc = could not find container 
\"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": container with ID starting with 0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79 not found: ID does not exist" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.093806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.192819 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.274292 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.291886 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:12 crc kubenswrapper[4984]: I0130 11:18:12.113071 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" path="/var/lib/kubelet/pods/ef041e51-918d-41ef-ac7b-d2ab23b45757/volumes" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.416395 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417755 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417773 4984 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417797 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-content" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417815 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-content" Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-utilities" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417856 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-utilities" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.418139 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.420011 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.427061 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557536 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557631 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659930 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.660758 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.660770 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.694464 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.753066 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.219887 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:51 crc kubenswrapper[4984]: W0130 11:19:51.228690 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bde2870_f8fa_4a9d_89dc_5882c25fe044.slice/crio-4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5 WatchSource:0}: Error finding container 4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5: Status 404 returned error can't find the container with id 4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5 Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.579754 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e" exitCode=0 Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.579963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"} Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.580151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerStarted","Data":"4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5"} Jan 30 11:19:52 crc kubenswrapper[4984]: I0130 11:19:52.595084 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765" exitCode=0 Jan 30 11:19:52 crc kubenswrapper[4984]: I0130 
11:19:52.595312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"} Jan 30 11:19:53 crc kubenswrapper[4984]: I0130 11:19:53.608416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerStarted","Data":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"} Jan 30 11:19:53 crc kubenswrapper[4984]: I0130 11:19:53.629552 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5pk6x" podStartSLOduration=2.203899676 podStartE2EDuration="3.629532577s" podCreationTimestamp="2026-01-30 11:19:50 +0000 UTC" firstStartedPulling="2026-01-30 11:19:51.582416502 +0000 UTC m=+4096.148720336" lastFinishedPulling="2026-01-30 11:19:53.008049383 +0000 UTC m=+4097.574353237" observedRunningTime="2026-01-30 11:19:53.627165174 +0000 UTC m=+4098.193468998" watchObservedRunningTime="2026-01-30 11:19:53.629532577 +0000 UTC m=+4098.195836421" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.134269 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.149940 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.184562 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205198 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205269 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205454 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308103 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308815 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308918 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.309272 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.349988 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.502537 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:58 crc kubenswrapper[4984]: I0130 11:19:58.144631 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:58 crc kubenswrapper[4984]: I0130 11:19:58.665861 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerStarted","Data":"b44973996cc05eb772cf42729fb3a64f9424ddbfb8fd891eced50913f8c45eac"} Jan 30 11:19:59 crc kubenswrapper[4984]: I0130 11:19:59.676566 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b" exitCode=0 Jan 30 11:19:59 crc kubenswrapper[4984]: I0130 11:19:59.676950 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"} Jan 30 11:20:00 crc kubenswrapper[4984]: I0130 11:20:00.753583 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:00 crc kubenswrapper[4984]: I0130 11:20:00.755507 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.052438 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.728563 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243" exitCode=0 Jan 30 11:20:01 crc 
kubenswrapper[4984]: I0130 11:20:01.735936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"} Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.873518 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.001470 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.002112 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.751641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerStarted","Data":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"} Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.788396 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzfnw" podStartSLOduration=4.011580263 podStartE2EDuration="6.788370293s" podCreationTimestamp="2026-01-30 11:19:57 +0000 UTC" firstStartedPulling="2026-01-30 11:19:59.680141494 +0000 UTC m=+4104.246445328" lastFinishedPulling="2026-01-30 
11:20:02.456931514 +0000 UTC m=+4107.023235358" observedRunningTime="2026-01-30 11:20:03.77300756 +0000 UTC m=+4108.339311384" watchObservedRunningTime="2026-01-30 11:20:03.788370293 +0000 UTC m=+4108.354674137" Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.969494 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:20:04 crc kubenswrapper[4984]: I0130 11:20:04.757512 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5pk6x" podUID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerName="registry-server" containerID="cri-o://8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" gracePeriod=2 Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.729415 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768486 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" exitCode=0 Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768521 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"} Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768542 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5"} Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768558 4984 scope.go:117] "RemoveContainer" 
containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768581 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.795565 4984 scope.go:117] "RemoveContainer" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.801916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.802625 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.803603 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.804319 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities" (OuterVolumeSpecName: "utilities") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.804601 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.809964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2" (OuterVolumeSpecName: "kube-api-access-7l8l2") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "kube-api-access-7l8l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.841327 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.862523 4984 scope.go:117] "RemoveContainer" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.906585 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.906623 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.918676 4984 scope.go:117] "RemoveContainer" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.919452 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": container with ID starting with 8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96 not found: ID does not exist" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.919520 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"} err="failed to get container status \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": rpc error: code = NotFound desc = could not find container \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": container with ID starting with 8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96 not 
found: ID does not exist" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.919555 4984 scope.go:117] "RemoveContainer" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765" Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.920001 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": container with ID starting with 1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765 not found: ID does not exist" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920040 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"} err="failed to get container status \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": rpc error: code = NotFound desc = could not find container \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": container with ID starting with 1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765 not found: ID does not exist" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920065 4984 scope.go:117] "RemoveContainer" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e" Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.920322 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": container with ID starting with 3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e not found: ID does not exist" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e" Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920359 4984 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"} err="failed to get container status \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": rpc error: code = NotFound desc = could not find container \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": container with ID starting with 3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e not found: ID does not exist" Jan 30 11:20:06 crc kubenswrapper[4984]: I0130 11:20:06.114006 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:20:06 crc kubenswrapper[4984]: I0130 11:20:06.114066 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:20:06 crc kubenswrapper[4984]: E0130 11:20:06.134752 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bde2870_f8fa_4a9d_89dc_5882c25fe044.slice\": RecentStats: unable to find data in memory cache]" Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.503388 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.504099 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.586021 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:08 crc kubenswrapper[4984]: I0130 11:20:08.105065 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" 
path="/var/lib/kubelet/pods/6bde2870-f8fa-4a9d-89dc-5882c25fe044/volumes" Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.574520 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.633983 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.893095 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nzfnw" podUID="c429fec3-b80b-42aa-8488-74b853752056" containerName="registry-server" containerID="cri-o://d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" gracePeriod=2 Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.300194 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376141 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376389 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod 
\"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.377349 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities" (OuterVolumeSpecName: "utilities") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.382468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp" (OuterVolumeSpecName: "kube-api-access-fppfp") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "kube-api-access-fppfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.436303 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.478665 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.478937 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.479051 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") on node \"crc\" DevicePath \"\"" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904144 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" exitCode=0 Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904205 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"} Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904238 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"b44973996cc05eb772cf42729fb3a64f9424ddbfb8fd891eced50913f8c45eac"} Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904287 4984 scope.go:117] "RemoveContainer" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 
11:20:18.905630 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.931649 4984 scope.go:117] "RemoveContainer" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243" Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.954573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.963066 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.972520 4984 scope.go:117] "RemoveContainer" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b" Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.007798 4984 scope.go:117] "RemoveContainer" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.011198 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": container with ID starting with d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c not found: ID does not exist" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.011265 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"} err="failed to get container status \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": rpc error: code = NotFound desc = could not find container \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": container with ID starting with 
d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c not found: ID does not exist" Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.011296 4984 scope.go:117] "RemoveContainer" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243" Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.014363 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": container with ID starting with e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243 not found: ID does not exist" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243" Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.014387 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"} err="failed to get container status \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": rpc error: code = NotFound desc = could not find container \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": container with ID starting with e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243 not found: ID does not exist" Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.014407 4984 scope.go:117] "RemoveContainer" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b" Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.014821 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": container with ID starting with ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b not found: ID does not exist" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b" Jan 30 11:20:19 crc 
kubenswrapper[4984]: I0130 11:20:19.014853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"} err="failed to get container status \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": rpc error: code = NotFound desc = could not find container \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": container with ID starting with ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b not found: ID does not exist" Jan 30 11:20:20 crc kubenswrapper[4984]: I0130 11:20:20.107528 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c429fec3-b80b-42aa-8488-74b853752056" path="/var/lib/kubelet/pods/c429fec3-b80b-42aa-8488-74b853752056/volumes" Jan 30 11:20:33 crc kubenswrapper[4984]: I0130 11:20:33.001332 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:20:33 crc kubenswrapper[4984]: I0130 11:20:33.003564 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"